I’m interested in starting to apply Monty to create Learning Modules that mimic the function of the posterior parietal cortex, which is believed (very simplistically) to provide proprioception and essentially a model of our entire body and its sensors; the model would understand its own body. These modules would enable the system to generate location data from sensor patches, something essential for the Thousand Brains Theory to function effectively. This Learning Module would then feed the resulting poses into the CMP for any other module to use.
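Just to make the idea concrete, here is a minimal sketch of what I have in mind. This is purely my own illustration and not Monty's actual Learning Module API; the class and method names are hypothetical, and the pose computation is a placeholder for whatever body model would actually be learned.

```python
import numpy as np

# Hypothetical sketch (not Monty's real LM interface): a "body model" learning
# module that consumes raw proprioceptive readings and outputs the pose
# (location + orientation) of each sensor patch, conceptually for broadcast
# over the Cortical Messaging Protocol.
class BodyModelLM:
    def __init__(self, sensor_frames):
        # sensor_frames: names of the sensor patches whose poses we want to track
        self.sensor_frames = sensor_frames

    def step(self, proprioceptive_readings):
        """Map internal sensor readings to per-sensor poses.

        Returns {sensor_name: (location_xyz, rotation_matrix)}.
        """
        poses = {}
        for name in self.sensor_frames:
            # Placeholder: a real implementation would learn or compute this
            # mapping (e.g. via forward kinematics or a learned body model).
            location = np.zeros(3)
            rotation = np.eye(3)
            poses[name] = (location, rotation)
        return poses
```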
Do you think this is an interesting approach? What’s your take on it? Where do you think it’s best to start? Any suggestions would help a lot. Is there a roadmap element related to this that I could start working on?
Hi Migue, that’s great to hear. This is definitely something we’re interested in exploring, although we’re still debating exactly how it would fit in, which problems it would be most important for solving, and how we would implement it. For example, the agents that Monty currently controls are very simple (single actuators), so there isn’t much of a body to actually model.
However, your question relates to a couple of interesting things that are nearer term on our roadmap. These are:
- Various improvements to our motor system, such as the ability to have multiple agents operating in parallel. Once we have this implemented, we could work on, e.g., a distant agent observing a point of interest, where the distant-agent LM then sends a goal state that moves a surface agent to that location to explore it further. You can eventually read more about this under our Future Work section, although these changes are in the process of being pushed via this PR.
- Augmenting our sensor modules with the ability to detect and estimate flow (e.g., optic flow). This relates to your point about getting location information, i.e., we want Monty to be able to use flow to naturally estimate self-movement. This is also described further under Future Work, i.e., the PR linked above. There is a rough sketch of the idea just after this list.
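To give a flavour of the second point, here is a minimal, illustrative sketch of estimating self-movement from dense optic flow. This is not Monty's actual sensor-module code; the function name and the naive averaging heuristic are just assumptions to build intuition, using OpenCV's Farneback flow.

```python
import cv2
import numpy as np

# Illustrative only: estimate the translational component of self-movement
# from dense optic flow between two consecutive grayscale frames captured by
# a moving camera.
def estimate_self_movement(prev_frame, next_frame):
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )
    # For camera translation roughly parallel to the image plane, the mean
    # image motion is approximately opposite to the camera's own motion.
    mean_flow = flow.reshape(-1, 2).mean(axis=0)
    return -mean_flow  # crude (dx, dy) self-movement estimate, in pixels
```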
I should note that we are currently also doing a major refactor of the motor system code-base, so the second item (flow estimation) would probably be the safest one to work on in terms of merge-conflict risk.
Hope that makes sense, let me know if I can clarify anything!
Thanks, Niels, for the excellent answer! It clarifies a lot—really appreciate it!
My idea, at least for now, would be to try creating a Monty-based structure capable of generating the location and orientation of any body frame of a simulated puppet-like robot using only internal (proprioceptive) sensor readings. This approach could provide a benchmark to validate the model’s performance in modeling itself and help extrapolate the methodology to any sensory system.
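As a toy illustration of what I mean (purely my own sketch, not tied to Monty's API): forward kinematics of a planar two-joint "puppet" arm, recovering the pose of each link frame from nothing but internal joint-angle readings.

```python
import numpy as np

# Toy sketch: recover each link frame's pose of a planar 2-joint "puppet" arm
# purely from internal joint-angle readings, via forward kinematics.
def planar_transform(theta, link_length):
    """Homogeneous transform: rotate by theta, then translate along the new x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c, -s, link_length * c],
        [s,  c, link_length * s],
        [0,  0, 1.0],
    ])

def frame_poses(joint_angles, link_lengths):
    """Return (x, y, heading) of each link's end frame in the base frame."""
    pose = np.eye(3)
    poses = []
    for theta, length in zip(joint_angles, link_lengths):
        pose = pose @ planar_transform(theta, length)
        heading = np.arctan2(pose[1, 0], pose[0, 0])
        poses.append((pose[0, 2], pose[1, 2], heading))
    return poses

# Example: two links of length 1.0, joints at 45 and -30 degrees.
print(frame_poses([np.pi / 4, -np.pi / 6], [1.0, 1.0]))
```

The "benchmark" part would then be comparing these ground-truth poses against what the learned body model infers from the same joint readings.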
Regarding flow estimation in upgraded sensory modules, that is quite a focused task as well. Estimating self-movement through visual flow is very clever and clearly present in the brain (e.g., the effect you experience when you’re on a stationary train and the train moving beside you makes you feel like you’re the one moving).
I’ll start diving into these tasks because I think it’s a great way to familiarize myself with the Monty codebase, enjoy the process, and—if we’re lucky—contribute something meaningful.
Thanks again, Niels!
Amazing, looking forward to hearing how it goes, Migue, and let us know if there’s anything we can ever help with!