Although Monty’s design is informed by neurology (and, more broadly, biology), it’s clearly able to use other approaches when they “make sense”. Most obviously, there isn’t any Python code in the neocortex.
Put another way, my impression is that @jhawkins is OK with the use of non-biological approaches, as long as the trade-offs are well considered. So, I’d like to explore the use of “helpers and hints” in Monty.
Helpers
Monty’s Sensor Modules and Motor Systems serve as “helpers” for the Learning Modules, freeing them from low-level concerns. Indeed, they remind me of device drivers:
… A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.
A Sensor Module could also perform various calculations and/or transformations on the incoming information. For example, it might compute summary statistics, apply log scaling, perform Fourier transforms, and so on.
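To make the idea concrete, here is a minimal sketch of the kind of pre-processing a Sensor Module helper might do. The `preprocess` function and its return format are invented for illustration; this is not Monty's actual `SensorModule` API.

```python
import math
from statistics import mean, stdev

def preprocess(readings):
    """Hypothetical Sensor Module helper: summarize and log-scale raw readings.
    (Illustrative only; not part of Monty's actual Sensor Module interface.)"""
    stats = {"mean": mean(readings), "stdev": stdev(readings)}
    # Log scaling compresses large dynamic ranges (e.g., light intensity),
    # handing the Learning Module a better-behaved signal.
    scaled = [math.log1p(r) for r in readings]
    return stats, scaled

stats, scaled = preprocess([1.0, 10.0, 100.0, 1000.0])
print(stats["mean"])  # 277.75
```

A Fourier transform stage would follow the same pattern, typically delegating to a library routine (e.g., `numpy.fft`) rather than hand-rolled code.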
Even in the Learning Modules, some helpers might show up. For example, I suspect that assorted math libraries could help with calculating pose information and related geometric quantities.
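As a taste of what such a math helper handles, here is the quaternion composition at the heart of pose arithmetic, written out in pure Python. In practice a Learning Module would almost certainly call a library (e.g., `scipy.spatial.transform.Rotation`) rather than code this by hand; the sketch just shows the kind of computation being delegated.

```python
import math

def quat_multiply(q1, q2):
    # Hamilton product of quaternions (w, x, y, z): composes two rotations.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def quat_about_z(angle):
    # Unit quaternion for a rotation of `angle` radians about the z axis.
    return (math.cos(angle / 2), 0.0, 0.0, math.sin(angle / 2))

# Two 45-degree rotations about z compose into one 90-degree rotation.
q = quat_multiply(quat_about_z(math.pi / 4), quat_about_z(math.pi / 4))
```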
Finally, subsystems could perform ancillary tasks (e.g., simulating the hypothalamus, monitoring Monty’s activities). So, what other “helpers” might a Monty system include?
Hints
There are various ways that “hints” might be given to Monty, including:
- providing feedback, goals, supervised learning, etc.
- starting with pre-connected sets of modules
- “seeding” data values in Learning Modules
My take is that all of this is OK, as long as the researchers are aware of the biases and hints they are providing. Indeed, it could be argued that some of these are inspired by the genetic encoding of instinctual behavior…
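The "seeding" idea above could be sketched as pre-loading a Learning Module with prior models at start-up. Everything here is hypothetical: the `LearningModule` class, the `seed_models` parameter, and the feature format are invented for illustration and are not Monty's actual classes or data structures.

```python
class LearningModule:
    """Hypothetical Learning Module that accepts 'seeded' prior knowledge."""

    def __init__(self, seed_models=None):
        # Models are normally learned from experience; seeding simply
        # initializes them with researcher-provided priors (a "hint").
        self.models = dict(seed_models or {})

    def observe(self, obj_name, feature):
        # Accumulate observed features into the model for this object.
        self.models.setdefault(obj_name, []).append(feature)

# Seed the module with a crude prior model of a "cup", then learn more.
lm = LearningModule(seed_models={"cup": [("curved_surface", (0, 0, 1))]})
lm.observe("cup", ("handle", (1, 0, 0)))
```

The bias this introduces is explicit and inspectable, which is the point: the researcher can see exactly what prior knowledge the module started with.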