We’re pleased to invite you to an online Meetup on December 17th from 9 to 11:30 AM PST, where Jeff Hawkins, Viviane Clay, and the entire Thousand Brains Project team will give a series of short talks covering the project, the progress we’ve made since last year, how Monty works, its exciting capabilities, how to get started, and how to get involved. There will be a lot of valuable information, and we wanted to give you an early heads-up so you can reserve a spot.
This will be a great event for anyone curious about the project and what we have been up to this past year, whether you know nothing about Monty or have been following the project closely since last year’s symposium.
A big thank you to all the folks who responded to our poll about the event!
Agenda
The legend of Monty. Follow the Thousand Brains Project team on its journey to see how we got to where we are today. @vclay
Meet Monty. Fundamental principles of Thousand Brains Systems and how they relate to the neocortex. @sknudstrup
Getting started with Monty. Running your first experiment, including a brief intro to configs and Monty’s class structure, plus where to find more information. @hlee
Looking into Monty’s “brain.” Interactive visualizations to get an intuitive understanding of how Monty thinks and learns. @rmounir
Ways to contribute. How to find your path into the project. @brainwaves
Making a PR to tbp.monty. The PR process from idea to merge, including tests, style checks, and benchmarks. @jshoemaker
Our path to an easy-to-use platform. How we’re planning to evolve Monty into an easy-to-use platform and how the community can help. @tslominski
Next year and beyond. TBP’s mission and what we are working on next. @jhawkins
Q&A session
We hope you can join us. We’re looking forward to sharing our progress and discussing where the next year is headed. An interactive Q&A session will follow the talks, so please bring your questions!
Thanks and congrats to the TBP team for an engaging, well-run and super interesting event yesterday. It was wonderful to hear details of the latest progress, and the future work & Q&A discussions were impressive and got me (even more) excited for what’s to come.
First, I would like to say a big thank you to the TBP team: great presentations, and I now understand a lot more about the inner workings of Monty, which brings me to my question. I have been banging on about building a sensorimotor robotics system in various posts. I now have the first unit built, and I will provide technical details in the ‘projects’ section early in the new year. (@Zachary_Danzig: as part of the design I came up with a novel touch sensor which is remarkably sensitive and cheap to make. Please contact me if you would like to know more; it could work well for fingertips.)
The first challenge is, given the stream of sensor data from the legs, to stand up and walk. I don’t expect it to learn to walk from first principles; I will gift it the initial capability (genetic memory, if you like), and from there it will need to adjust its sensorimotor melody of walking as its sensors dictate.
Given Monty’s focus on object recognition, I am wondering whether generating walking patterns is at all achievable in Monty as it is today, or am I currently living on a completely different island, without a boat? Alternatively, I am looking at some kind of hierarchical structure of RNNs.
Thank you, team, for your dedication in preparing this. It’s amazing how much progress you’ve achieved in the past year alone. I think the saliency and 2D sensor modules are particularly promising in their potential to greatly extend Monty’s capabilities. Cheers to a fruitful 2026!
A sensorimotor robotics system sounds exciting, and we look forward to hearing more about what you develop. I think we need more motor-system-like sensorimotor robotics.
Regarding your question: generating walking patterns is not the focus of Monty at present, nor will it be in the future. The focus is on creating Goals (a type of Cortical Message) that the MotorSystem will then make happen in whatever way is expedient. We rely heavily on hardcoding into the MotorSystem any and all means of enacting the Goals. For the robot to stand up and walk, the MotorSystem would have to be capable of doing so, but from Monty’s perspective, it would only be outputting Goals.
The enactment of actions by a robot is a broad field with multiple paradigms, and we are not equipped to tackle it within the Thousand Brains Project. We aim to limit ourselves to cortical (and cortical-adjacent, e.g., hippocampus, thalamus) sensorimotor algorithms, leaving motor-system sensorimotor algorithms related to the cerebellum, brainstem, and spine for others to develop. And, while we wait for others to create and improve those, we intend to integrate existing state-of-the-art robotics technology to enact the actions.
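To make the division of labor concrete, here is a minimal Python sketch of the idea described above: Monty emits a Goal (a kind of cortical message describing a desired state), and a separate, hardcoded MotorSystem decides how to enact it. The class and method names here are purely illustrative assumptions, not the actual tbp.monty API.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """A target state Monty wants the body to reach, e.g. a sensor pose.

    Hypothetical: in this sketch a Goal is just a target position in
    body coordinates.
    """
    target_position: tuple  # (x, y, z)

class MotorSystem:
    """Hardcoded enactment layer that turns Goals into low-level commands.

    For a walking robot, gait generation would live here; Monty itself
    never plans footsteps, it only emits the Goal.
    """
    def enact(self, goal: Goal) -> list[str]:
        x, y, z = goal.target_position
        # A real robot would run inverse kinematics or a gait controller
        # here; this sketch just emits a symbolic command string.
        return [f"move_to x={x} y={y} z={z}"]

# Monty's side of the interface: output a Goal and hand it off.
goal = Goal(target_position=(0.5, 0.0, 0.2))
commands = MotorSystem().enact(goal)
```

The point of the sketch is that swapping a wheeled base for a legged one only changes `MotorSystem.enact`; the Goal interface that Monty sees stays the same.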
Whilst I’m not certain I’ll be including tactile sensing in my project (current focus is a stereo cam and ToF sensor array) I’d still be interested to hear about your touch sensor in case it’s something I get time to implement!