I decided to take the plunge into the code. I forked/cloned the latest from the repository to my MacBook (M1) and then began running pytest. I think I’m in the final stages, but the last test(s) are taking hours, running through Rosetta. However, it has given me an opportunity to do more reading about Monty and connect some dots to concepts with which I have prior knowledge. I wrote some of this down and put together a 7-minute explainer using NotebookLM (The Gorilla, Rat & Robot: The New Science of How We Imagine). There are references in the description section. As always, in 7 minutes of compressed ideas it could make mistakes, just as I could. Still, it captures much of what has been swirling in my head.
The fact is this: I feel like I’m participating (if only on the sidelines, so far) in something so extraordinary and so exciting, not only the confirmation of TBT’s core ideas of sensorimotor traversal through reference frames, but the real possibility that a machine could imagine its future, that I’m surprised it’s not headline news. And, as I thought about it, and about the research by Benjamin Bergen on embodied simulations, I thought: maybe some are simply blind to the gorilla before them, like the radiologists in the study where a gorilla, “48 times larger than the average nodule, was inserted in the last case” of a (CT) lung cancer screening image, and 83% of them “did not see the gorilla.”
Perhaps where I’m seeing “machine imagination,” others frame it as:
• Hippocampal forward trajectory simulation (Johnson & Redish, 2007)
• Path integration and planning
• Hypothesis management and resampling
• Model-based policy execution
Forgive me if I’m sounding like I just discovered that water is wet and feel compelled to share it. This is not to diminish the technical accomplishments or ongoing work. It’s simply to shed some light on something I’m not seeing discussed. In 2005, David Redish did not initially acknowledge what his graduate student Adam Johnson was seeing when Johnson declared, “the rats are doing mental time travel.” This may go against the common view of human exceptionalism that keeps finding its way into conversations about AI. Maybe a kind of re-labeling underlies how we talk about it, so as to avoid push-back.
After I read Bergen’s “Louder Than Words” back in 2012 or so, I couldn’t help but think of language communication differently. I came to see it as a way of sharing (triggering?) embodied simulations in others to produce similar “imaginings” or thoughts. Provided we had similar life experiences (sometimes made clearer with analogies), we could (to put it into TBT-speak) traverse similar reference frames.
We share embodied simulations through commonly held languages. And, as I’m learning, Monty has the Cortical Messaging Protocol (CMP). Many Montys could theoretically share their imaginings with each other. Think about the many implications. And, water IS wet!
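To make the “sharing imaginings” idea concrete for myself, here’s a toy sketch. This is not the real Monty code and not the actual CMP message format; the class and function names (`CMPStyleMessage`, `imagine_next_observation`) are made up for illustration. It only tries to capture the gist I’ve picked up from the docs: modules exchange features at a pose in a common reference frame, so one instance’s inferred (or imagined) state is, in principle, consumable by another.

```python
# Hypothetical illustration only -- NOT the actual Monty API or CMP schema.
# Sketches the idea of exchanging "features at a pose" in a shared reference frame.

from dataclasses import dataclass


@dataclass
class CMPStyleMessage:
    """Made-up stand-in for a CMP-like message: features observed at a pose."""
    features: dict        # e.g. {"curvature": 0.2, "color": "red"}
    location: tuple       # (x, y, z) in a shared reference frame
    orientation: tuple    # e.g. (roll, pitch, yaw)
    confidence: float = 1.0


def imagine_next_observation(current: CMPStyleMessage, step: tuple) -> CMPStyleMessage:
    """Toy 'imagination': predict the message after a hypothetical movement."""
    new_location = tuple(c + d for c, d in zip(current.location, step))
    return CMPStyleMessage(
        features=current.features,            # naive: assume features persist
        location=new_location,
        orientation=current.orientation,
        confidence=current.confidence * 0.9,  # predictions carry less certainty
    )


# One "Monty" forms an imagined future state...
observed = CMPStyleMessage({"curvature": 0.2}, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
imagined = imagine_next_observation(observed, step=(0.01, 0.0, 0.0))

# ...and because the message format is shared, another instance could consume it.
print(imagined)
```

The point of the sketch isn’t the code itself; it’s that a common message format is what would let many Montys pass their “imaginings” around, the way a shared language lets us trigger embodied simulations in each other.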
Well, I just needed to share. Like they say, “[I] can make mistakes, so double-check it.”