I am coming at this problem from a background in signal processing, distributed systems, networking, and security. I am not a neuroscientist by any stretch of the imagination, so I may not fully understand the theory and goals of TBT; apologies in advance.
Looking into TBT, I see its potential to provide real-time unsupervised learning. This strikes me as an exciting aspect of AI that is somewhat orthogonal to the current LLM/LRM approaches, yet complementary to them. However, I am struggling to understand how to generalize the Monty/TBT code to a broader set of use cases.
To give a concrete example, I would like to build a real-time network security application that listens to network traffic and learns its behavior in order to detect anomalous events. Some work has already been done in this area on inserting ML into the Linux kernel; see "Machine learning-powered traffic processing in commodity hardware with eBPF".
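For a sense of the kind of kernel-level visibility I mean, here is a minimal sketch (not from that paper, and assuming Linux with root privileges and the bcc Python bindings installed) that traces new outbound TCP connections as they happen:

```python
# A rough illustration only (assumes Linux, root, and the bcc Python bindings):
# print a trace line each time the kernel attempts an outbound IPv4 TCP connection.
from bcc import BPF

bpf_program = r"""
#include <uapi/linux/ptrace.h>

int kprobe__tcp_v4_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    bpf_trace_printk("tcp_v4_connect pid=%d\n", pid);
    return 0;
}
"""

b = BPF(text=bpf_program)
print("Tracing outbound TCP connections... Ctrl-C to stop.")
b.trace_print()
```

From hooks like this it is straightforward to maintain a per-session table in kernel maps and surface it to user space in real time.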
Many security applications do use machine learning for this today, but they need training data, and when a new type of threat arises the training data often has to be updated and the code redeployed. So the solutions being deployed today are a combination of static rules, heuristics, and machine learning. I am wondering whether we can do a lot better with TBT/Monty, which provides unsupervised learning and acts more like a human operator examining the network. The issue is that humans do not scale with the number of nodes, sessions, and packets flowing through a modern network; it is not humanly possible to observe everything, learn the patterns, and make decisions.
The sensor model is fairly easy and can be built using eBPF (for Linux); it would provide a real-time session table that is updated as new packets are received by the system. The same goes for the action model, which is fairly simple: allow, deny, notify, etc. Motion would be prompted by the sensor learning of other nodes and deploying sensors to them to build a graph of the network, i.e. moving to get different views. The challenge I see is how to generalize the learning model. I could write a use-case-specific learning model, but that seems to defeat the goal of TBT to create a general model of AI.
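To make the sensor/action split concrete, here is roughly the shape I have in mind. None of these names (SessionObservation, NetworkAction) come from Monty; they are my own invention, purely to illustrate how a session-table row and the policy verbs could be expressed generically:

```python
# Hypothetical sketch of the sensor/action split described above.
# These names are not Monty's API; they only illustrate the shape of the data.
from dataclasses import dataclass
from enum import Enum, auto


class NetworkAction(Enum):
    """The 'motor' vocabulary: simple policy verbs plus a 'move' to another node."""
    ALLOW = auto()
    DENY = auto()
    NOTIFY = auto()
    DEPLOY_SENSOR = auto()  # "motion": extend observation to a newly discovered node


@dataclass
class SessionObservation:
    """One row of the eBPF-maintained session table, emitted as a generic feature vector."""
    src: str            # source IP
    dst: str            # destination IP
    dst_port: int
    protocol: str       # "tcp", "udp", ...
    bytes_sent: int
    bytes_received: int
    duration_s: float

    def features(self) -> list[float]:
        # A fixed-length numeric view that a generic learning module could consume,
        # analogous to reducing a vision patch to features regardless of its content.
        return [
            float(self.dst_port),
            float(self.bytes_sent),
            float(self.bytes_received),
            self.duration_s,
        ]
```

The open question, as above, is what sits between `features()` and `NetworkAction`: ideally a learning module that is not specific to networking at all.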
I can see a similar problem with the existing vision sensor: if it were extended to a larger part of the electromagnetic spectrum, new fields would have to be added to the sensor and new learning models added to detect different aspects of the spectrum. While this might require different sensors, they are all still just detecting spectral content, and although such sensors would go beyond human senses they may be a natural fit for robotic applications.
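Purely to illustrate that point (again with made-up names, not anything from Monty): the sensor hardware changes per band, but the observation schema, a location plus sampled spectral content, need not:

```python
# Illustrative only: different bands mean different sensors, not a different schema.
from dataclasses import dataclass


@dataclass
class SpectralObservation:
    location: tuple[float, float, float]  # where the patch was sensed
    band_nm: tuple[float, float]          # wavelength range covered by this sensor
    intensities: list[float]              # sampled spectral content within the band


visible = SpectralObservation((0.0, 0.0, 1.0), (380.0, 740.0), [0.2, 0.7, 0.4])
infrared = SpectralObservation((0.0, 0.0, 1.0), (740.0, 1500.0), [0.9, 0.1])
```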
So my core question is: what is the proposed method for creating generalized learning models? LLMs are a mechanism for extracting meaning from large volumes of training data, and while there is a lot of work in making them human-consumable, the core code is not rewritten for every use case.
I think my understanding must be off, as the only approach I can see is to create custom code for every sensor, and I am not sure that is scalable. Any help or suggestions would be welcome.