Thoughts on helpers and hints

Although Monty’s design is informed by neurology (and, more broadly, biology), it’s clearly able to use other approaches when they “make sense”. Most obviously, there isn’t any Python code in the neocortex.

Put another way, my impression is that @jhawkins is OK with the use of non-biological approaches, as long as the trade-offs are well considered. So, I’d like to explore the use of “helpers and hints” in Monty.

Helpers

Monty’s Sensor Modules and Motor Systems serve as “helpers” for the Learning Modules, freeing them from low-level concerns. Indeed, they remind me of device drivers:

… A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.

A Sensor Module could also perform various calculations and/or transformations on the incoming information. For example, it might calculate assorted statistics, do log scaling, perform Fourier Transforms, etc.
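As a hedged illustration (not Monty's actual Sensor Module API), a helper of this kind could look like the following sketch, which computes basic statistics, a log-scaled view, and a Fourier-transformed view of a 1-D sensor patch; the function name and output fields are my own invention:

```python
import numpy as np

def preprocess_patch(samples: np.ndarray) -> dict:
    """Hypothetical sensor-module helper: derive statistics, a log-scaled
    view, and a frequency-domain view of a 1-D sensor patch."""
    log_scaled = np.log1p(np.abs(samples))   # log scaling, well-defined at zero
    spectrum = np.abs(np.fft.rfft(samples))  # magnitude of the real-input FFT
    return {
        "mean": float(samples.mean()),
        "std": float(samples.std()),
        "log_scaled": log_scaled,
        "spectrum": spectrum,
    }

# Example: a pure 4-cycle sine over 64 samples concentrates its energy in bin 4.
t = np.arange(64)
signal = np.sin(2 * np.pi * 4 * t / 64)
features = preprocess_patch(signal)
print(int(np.argmax(features["spectrum"])))  # → 4
```

The point is only that such transformations can live entirely outside the Learning Modules, handing them pre-digested features.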

Even in the Learning Modules, some helpers might show up. For example, I suspect that assorted math libraries could help with calculating pose information, etc.

Finally, subsystems could perform ancillary tasks (e.g., simulating the hypothalamus, monitoring Monty’s activities). So, what other “helpers” might a Monty system include?

Hints

There are various ways that “hints” might be given to Monty, including:

  • providing feedback, goals, supervised learning, etc.
  • starting with pre-connected sets of modules
  • “seeding” data values in Learning Modules

My take is that all of this is OK, as long as the researchers are aware of the biases and hints they are providing. Indeed, it could be argued that some of these are inspired by the genetic encoding of instinctual behavior…


Hi @Rich_Morin, great thoughts! Yes, in general we are not too stuck on biological realism for lower-level implementation details. The key tenets we hold on to are the higher-level principles outlined in the Thousand Brains Theory, namely:

  1. Learning and inference is sensorimotor.
  2. Objects are represented with structured models.
  3. (1) and (2) are the responsibility of a general purpose processing unit (cortical column/learning module) that can learn models of complete objects.

However, we agree that we can introduce elements that are not as biologically plausible, where appropriate. Often this can be a stopgap for when we don’t actually know how biology does something (e.g., representing 3D reference frames in a column - is this done by grid cells or something else?). At other times, it could just enable a superhuman aspect of the system (e.g., being able to path-integrate thousands of hypotheses in parallel).

Just as humans are augmented with calculators, there are likely a variety of ways to augment Monty, particularly at the sensory and motor interfaces as you say. I don’t want to get in the way of any brainstorming so just thought I’d weigh in to basically say - we agree! The main thing is to keep track of the north star (points 1-3 above). In case you haven’t seen it, you might be interested in this meeting we had discussing how we balance biological plausibility with practical progress.

The final point worth mentioning is one that Jeff often highlights - the more we deviate from biology, the more likely we will end up with a fundamentally different (and in some ways limited) solution. Sometimes this can be ok, but other times you can end up in a deep local minimum with little room for escape. Deep learning is a good example of this - it started with classical models of biological neurons, then started relaxing assumptions about data distributions and locality of information, and ran away into a very different space. It’s obviously a useful technology, but it is now so fundamentally divorced from neuroscience that it is difficult for it to work back to be able to handle sensorimotor learning.

Anyways, looking forward to hearing what ideas people have for helpers and hints.


Hi

Perhaps a little off topic from helpers and hints but addressing the comment about Python in the neocortex and deep learning.

For some years now I have been wrestling with the problem of how to use artificial neural networks to control artificial creatures, entirely self-determinate robots, in the real world. I was hoping that Monty could be the platform I was looking for but unfortunately it doesn’t look like a particularly good fit. I am starting from the point of basic sensorimotor control and trying to build a world model from the ground up.

Early on it seemed obvious to me that such a controlling system must be made up entirely of data. Any code embedded in the control system would be vulnerable to bugs, inflexibility, compilation changes and future obsolescence. Only a pure data structure can remain independent of the means of processing it and consequently proof against the continually evolving world of hardware and software. The artificial brain structure and the means of executing it must remain independent from one another.

But still the principle of helpers applies. Conventional code is used to control the sensors and convert their output into a common data format suitable for feeding into the artificial neural network; conversely, the network's sensorimotor output signals must pass through conventional code to control the physical actuators. The conventional code also provides a deterministic backstop against potentially damaging outputs from the nondeterministic neural network.
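A deterministic backstop of the kind described above can be sketched in a few lines; this is a generic illustration (the limits, the NaN-to-stop policy, and the function name are assumptions, not any particular robot's safety layer):

```python
def clamp_command(value: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Deterministic backstop: bound a network output before it reaches
    the actuator, whatever the network happened to produce."""
    if value != value:  # NaN guard: an undefined command becomes a safe stop
        return 0.0
    return max(lo, min(hi, value))

print(clamp_command(3.7))           # → 1.0 (saturated, not passed through)
print(clamp_command(float("nan")))  # → 0.0 (safe stop)
```

The network remains free to produce anything; the conventional code guarantees the actuator only ever sees commands inside a known-safe envelope.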

I am in agreement with Jeff that deep learning has deviated too far from neuroscience, with its emphasis on software library generated uniform networks, minimal feedback and levels of trial and error training completely implausible in the real world. The wiring of biological neural networks is to some extent chaotic with an evolutionary driven mixture of learned and prewired behavior.

I have been unable to find any tools capable of generating and processing such networks, and so I am reluctantly looking at having to create the tools myself.

At the risk of venturing far afield of inspiration by biological mechanisms, I’d like to discuss a possible pairing of hints and helpers…

Hints: Categories, Keywords, Tags, etc.

As a Monty system explores its environment, sets of modules will learn to recognize categories of objects (e.g., cups, handles, logos) and develop a graph connecting them. However, nothing in the sensed or low-level inferred information can tell us what an object might be called or what a graph link might represent.

If our goal were solely to replicate the (neo)cortex, this might be a reasonable limitation. However, if this limitation could be resolved, it might help to provide more visibility into Monty’s methods and results. More pragmatically, it might make production Monty systems more useful.

So, if we know things about how a Monty system is being trained, it makes sense to record that information (e.g., in CMP messages). The specificity of this information will vary, ranging from tags through keywords to categories. Regardless, it could help with human and/or AI-based analysis.
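To make the idea concrete, here is a hypothetical sketch of a message wrapper carrying hint metadata at the three levels of specificity mentioned above. The class and field names are my own; the real Cortical Messaging Protocol (CMP) does not necessarily have these fields:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HintedMessage:
    """Hypothetical wrapper: a CMP-style message augmented with optional,
    human-supplied hint metadata at varying levels of specificity."""
    payload: dict                                   # the ordinary message content
    tags: List[str] = field(default_factory=list)   # least specific
    keywords: List[str] = field(default_factory=list)
    category: Optional[str] = None                  # most specific

msg = HintedMessage(
    payload={"pose": (0.0, 0.0, 1.0)},
    tags=["training-session-3"],
    keywords=["ceramic", "handle"],
    category="cup",
)
print(msg.category)  # → cup
```

Because every field beyond the payload is optional, unhinted operation is unchanged; analysis tools can simply pick up whatever metadata happens to be present.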

Helpers: Graph Databases

Monty’s generated “graph” is represented as objects (i.e., nodes) and relationships (i.e., edges), stored in the memories of the Learning Modules (LMs). Although this may let the LMs make inferences and such, it may not provide convenient support for searching and/or traversing the graph.

In the thread About Displacement Cells, I discussed some graph databases. I’d like Monty to use one or two of these to backstop its in-memory graph.
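To illustrate the kind of query a graph backstop makes cheap, here is a toy mirror of an LM's object graph as labeled edges, with a breadth-first traversal over it. The node and relationship names are invented for illustration; a real deployment would delegate this to the graph database:

```python
from collections import deque

# Hypothetical mirror of an LM's object graph: nodes are object names,
# edges are labeled relationships (the shape a graph database would store).
edges = {
    "cup": [("has-part", "handle"), ("bears", "logo")],
    "handle": [("part-of", "cup")],
    "logo": [],
}

def reachable(start: str) -> set:
    """Breadth-first traversal: one of the searches a graph store supports
    directly, but which raw per-LM memories do not."""
    seen, queue = {start}, deque([start])
    while queue:
        for _, neighbor in edges[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable("cup")))  # → ['cup', 'handle', 'logo']
```

Keeping such a store as a backstop, rather than the source of truth, would leave the LMs' in-memory representation untouched while giving humans (and tooling) a convenient window into it.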

P.S. If anyone is interested in learning about biological mechanisms, from the perspective of philosophy of science, I’d recommend reading In Search of Mechanisms.