Hello ThousandBrains community!
To briefly introduce myself, I have been following the work of Jeff Hawkins and Numenta since I read ‘On Intelligence’ many years ago. I had the honor of Skyping with Jeff once when I was a young dev consultant, advising a project to try Numenta’s HTM (that was before Deep Learning was a thing).
15 years later, I am thinking about topics for a postdoc after my Ph.D. on the participatory design of telerobotic puppets. My experience in machine learning is limited (mostly using models, with some training and fine-tuning), but it seems there is a dire need for more human-centered and sustainable approaches to creating AI.
In my vision, I imagine a participatory workshop where participants try to explain their deep thoughts about life to a ThousandBrains robot (or puppet), maybe teaching it how to behave in their culture. I realize this is a stretch, even for a long-term project, but maybe the first step could be to leverage existing LLMs to fast-track Monty into understanding language. Specifically, I’ve been wondering (and of course discussing with LLMs) whether it’s possible to hook up some middle layer of a pre-trained transformer (maybe Llama?) and convert its output to CMP (the Cortical Messaging Protocol), so that Monty gets the semantics for free but learns context and action-predictions on its own. I have seen discussions of similar ideas on this forum but didn’t find any concrete implementation suggestions.
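To make the idea a bit more concrete, here is a very rough Python sketch of the direction I mean: pull the hidden states from a middle layer of a pre-trained transformer and package each token’s embedding as a CMP-style “features at a location” message. To be clear, the `CMPMessage` class, the layer index, and the model name are all placeholders of mine; I don’t know what the real Monty/CMP interface expects, so please read this as a thought experiment rather than a proposal for an implementation:

```python
# Rough sketch, NOT a working Monty integration.
# CMPMessage is a hypothetical stand-in for whatever Monty's real CMP / State
# classes look like; the middle-layer index would need to be tuned empirically.
from dataclasses import dataclass

import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.2-1B"  # any open causal LM (e.g. gpt2) works for prototyping
MIDDLE_LAYER = 8                        # which hidden layer to tap; just a guess


@dataclass
class CMPMessage:
    """Hypothetical CMP-style message: features at a 'location' in token space."""
    features: torch.Tensor  # embedding vector for one token
    location: int           # token position, standing in for a pose


def text_to_cmp_messages(text: str) -> list[CMPMessage]:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
    model.eval()

    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # hidden_states is a tuple: (input embeddings, layer_1, ..., layer_N)
    layer = outputs.hidden_states[MIDDLE_LAYER][0]  # shape: (seq_len, hidden_dim)

    return [
        CMPMessage(features=layer[i], location=i)
        for i in range(layer.shape[0])
    ]


if __name__ == "__main__":
    messages = text_to_cmp_messages("The cup is on the table.")
    print(f"{len(messages)} CMP-style messages, dim={messages[0].features.shape[0]}")
```

The interesting (and hard) part would obviously be on the Monty side: how a learning module would treat these embeddings as sensed features and build its own models of context and action-predictions on top of them.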
My hope is that this approach could be sustainable, since the big pre-training of LLMs is already done, and the Monty integration might even run on edge devices. Also, I feel that an interactive learning experience built around language could showcase the main thing Monty can do that LLMs cannot: actually learning concepts and behavior, rather than just maintaining a stack of text prompts as ‘memory’ (like in the film Memento).
Looking forward to your thoughts!
/Avner
