At the heart of the Thousand Brains Project (TBP) is a mission to build a groundbreaking type of AI based on how our brain works. We see ourselves as an inseparable part of this community, and what we expect and desire of you mirrors what we expect of ourselves: collaboration, curiosity, and commitment to accelerating this mission together.
While the specifics of how we achieve this will evolve (and we’re open to suggestions!), here’s a breakdown of our short-term, medium-term, and more nebulous long-term goals:
Short Term (and ongoing): Building Credibility, Community, and Codability
Our vision:
Establishing TBP as a foundation for research, collaboration, and meaningful contributions.
How you can help:
PhD and Master’s Students (AI/Computer Science/related fields): Incorporate TBP into your thesis - explore specific aspects of the theory and validate them.
Technical Contributors: Help refine the codebase! This could include:
Refactoring and improving code for usability.
Creating tutorials, videos, or other educational content.
Enhancing the test suite, sensor modules, or contributing to roadmap items.
Non-Technical Contributors: Assist with documentation, making the project accessible, understandable, and shareable. Help grow and engage the community.
Neuroscientists: Review research materials, provide feedback, and share evidence or alternative perspectives on our problem-solving approaches.
Medium Term: Robots and Real-World Applications!
Our vision:
TBP’s principles applied to sensorimotor systems that can interact with and manipulate their environment.
How you can help:
Roboticists/Mechatronics Experts: Collaborate on building simple robots to demonstrate principles in action. Over time, these systems will exhibit more complex, functional behavior.
Technical Community: Take our software, integrate it with hardware, and demonstrate real-world applications to inspire and grow the community further. Expand the capabilities of TBP software by:
Creating new learning/sensor/motor implementations.
Long Term: Intelligence as Ubiquitous as Electricity
Our vision:
TBP technology becomes as commonplace as the electrical grid - accessible, reliable, and transformative.
Intelligent systems will be industrialized, solving humanity’s challenges while being easy to use, without requiring a deep understanding of their inner workings.
The Purpose of this Forum
With this Discourse server, we hope to provide a centralized place where people who would like to support our project, or find out more about it, can ask questions and provide feedback. We hope this forum will help people find the answers they need to start contributing. It is also a place where researchers, such as neuroscientists, can provide feedback and input on our ideas. Lastly, we hope it will be a friendly, communal place where like-minded individuals can share their TBP-based projects, find collaborators, and support each other in building the future of AI.
I hope this is the right place for my query. If not, feel free to move it!
I’m considering using the TBP/Monty framework for my final-year undergraduate project in Robotics, Mechatronics, and Control Engineering, which I’ll start around October 2025. Initially, my project idea involved using machine learning techniques with an EEG (and possibly an ECG) to control a robotic arm and gripper. My goal was to explore whether neuroplasticity in the brain could enable fine control of additional or replacement appendages.
However, I’ve realized that current EEG technology might not be precise enough for accurate motor control of a robotic gripper. Given this limitation, I’m pivoting my focus to developing a sensorimotor learning system for a robotic limb, using TBP as a foundation. I’m still unsure whether to completely drop the EEG aspect but am certain I want to integrate TBP into my project and work towards your medium-term goal.
I’d love to hear any advice or suggestions from the TBP community regarding this project. Am I biting off more than I can chew? Should I drop the EEG component altogether? Any guidance would be greatly appreciated.
The precision issue is an interesting one. You could consider a hybrid approach. To temporarily deal with the signal-accuracy issue, you could use an EMG on the muscles of an existing limb (this could be particularly useful in the case of a prosthetic replacement). You should get much more precise control signals this way than you would with EEG. And on the plus side, this is already common practice in prosthetics control, so you wouldn’t have to start from scratch in your research.
In addition to the EMG, you could still use an EEG. Not for precise control of the robotic limb, but for monitoring brain activity patterns during Monty’s learning process.
Obviously Monty would sit between these two, learning how specific EEG readouts correspond with downstream EMG control. I would think that, eventually, Monty wouldn’t even need the EMG feedback: it would be able to predictively process intent based on learned EEG patterns. In my mind, the relationship here would be almost like that of an octopus’s central brain and the “mini-brains” found in its limbs.
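To make that idea concrete, here is a minimal sketch of the supervised stage, assuming the EEG has already been reduced to feature windows and the EMG to control channels. The shapes, the synthetic data, and the plain ridge regressor are all illustrative assumptions, not anything Monty provides:

```python
# Illustrative sketch: while the EMG is available, it supervises an
# EEG -> control mapping; later, EEG alone could drive the limb.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 5000 time windows, 64 band-power features per EEG
# window, 4 control channels derived from EMG envelopes (grip, wrist, ...).
eeg_features = rng.normal(size=(5000, 64))
true_map = rng.normal(size=(64, 4))
emg_controls = eeg_features @ true_map + 0.1 * rng.normal(size=(5000, 4))

X_train, X_test, y_train, y_test = train_test_split(
    eeg_features, emg_controls, test_size=0.2, random_state=0
)

# Fit the EEG -> control mapping while EMG ground truth exists.
model = Ridge(alpha=1.0).fit(X_train, y_train)

# Once the fit is good enough, predicted controls stand in for the EMG.
predicted_controls = model.predict(X_test)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

In practice the mapping is unlikely to be linear, but the same train-on-EMG, run-on-EEG pattern applies to whatever model you substitute.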
Not sure if the above process would work at all, but it’s an approach I’d consider.
I’m really excited to hear that you want to use TBP in your final-year undergraduate project and am happy to chat more to figure out a reasonable scope and application!
I think using Monty to control a robotic arm would be very cool! Do you have a more specific task in mind? One thing to highlight here is that currently all of Monty’s policies are focused on inference or learning; we don’t have any policies aimed at manipulating the world.
Since the Monty framework is structured in a very modular way, there is nothing stopping you from writing new policies. However, one of the reasons we haven’t added policies for manipulating the world yet is that we are currently working on another basic requirement for this: modeling object states and behaviors. Hopefully the implementation will be more advanced in that regard by the time you start your project in October, but it might be safer to plan a project aimed at inference - for example, moving a sensor attached to the robotic arm around an object to recognize it (or learn about it), as sketched below.
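To give a feel for the shape of such a project, here is a high-level sketch of that inference loop. None of these names are real Monty APIs; `robot`, `sensor_module`, and `learning_module` are hypothetical placeholders for whatever arm driver and Monty configuration the project ends up using:

```python
# Hypothetical outline: move a sensor mounted on a robotic arm around an
# object until the system converges on a recognition. All interfaces here
# are invented placeholders, not tbp.monty classes.

def recognize_object(robot, sensor_module, learning_module, max_steps=100):
    for _ in range(max_steps):
        # Sense at the current pose: e.g. a depth patch or touch reading,
        # paired with the sensor's location/orientation in space.
        observation = sensor_module.read()
        pose = robot.get_sensor_pose()

        # Update the object-and-pose hypotheses with the new evidence.
        learning_module.update(observation, pose)
        if learning_module.is_confident():
            return learning_module.most_likely_object()

        # Ask for the next most informative viewpoint and move there.
        robot.move_sensor_to(learning_module.propose_next_pose())

    return None  # no confident match within the movement budget
```

The key sensorimotor ingredient is that every observation is paired with where the sensor was when it was taken; the arm supplies the movement, and the model accumulates evidence across poses.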
I am not sure how EEG could be integrated into this setup, since EEG electrodes are not moving sensors. Maybe you can explain a bit more about what kind of setup you were thinking of there.
Please feel free to start a new topic in the Code > Projects category (Projects - Thousand Brains Project) so we can have more in-depth discussions about it.
Best wishes,
Viviane
P.S. One more thing to note is that by then, we plan to publish a “Monty for robotics starter kit,” which should make it easier for you to get started on building a demo. It would be super cool to feature your project on our Project Showcase page!
Thanks for the suggestion! An EMG sounds like a solid option - I was considering ECG for similar reasons, but an EMG’s precision for motor control would be a better fit.
Also, I didn’t realize octopi have decentralized brains! I’ve spent way too long this evening reading up on it!
I do like the concept, but I don’t know how the interaction between the EEG and EMG would work through Monty; I figure there needs to be some sort of moving sensor to be able to incorporate Monty at all. It might be something a future iteration of Monty is capable of once TBP tackles abstract concepts like data output.
Still, many thanks for the ideas! They’re very helpful as I try to pull together a coherent project.
Thanks for your support! I’m super excited about the possibilities with TBP and Monty, and it’s great to hear about the upcoming robotics starter kit; having that as a foundation will be a huge help. Do you know roughly what it would entail and when it might be out (I know that nothing is guaranteed!)? It might help me with the direction I decide to take for this project.
For the task, I originally wanted something that could fully replace a human arm, or act as an additional one (Doc Ock style): picking up and manipulating objects, moving at the user’s command, and doing so with decent accuracy and precision - though I realise I was probably being incredibly optimistic here!
On the EEG side, my initial idea was to use it as the main part of the control mechanism for the arm. I’m interested in neuroscience, and an EEG was mostly an excuse to look at the electrical activity from parts of the brain.
However, as much as I would like an excuse to include it, an EEG might not be applicable to a project like this.
Using Monty for object inference with a sensor (or perhaps multiple sensors) attached to the gripper/hand of the robotic arm sounds interesting, but I have a sneaking suspicion that such a project might already have been achieved by the time I start in October. I’d love to explore other ways to implement a sensorimotor learning system while keeping that idea as a fallback.
This is still quite conceptual, so I’ll flesh it out more when I find time over the next few weeks. I’ll make a topic in the Projects category and create a Gist as suggested; feel free to respond to this message over there.
Thanks again for your encouragement and advice!
Zach
My thought with the EEG implementation was that it would provide a kind of “baseline intent” which Monty could then interpret. That said, the prosthetic would still need to provide sensory input to Monty, so it would know what to perform work on/against.
All that said, I think you and Viviane are correct: incorporating EEG into this might be a bit too much to chew on, at least in the beginning.
Please do post up your project details eventually. It’s definitely cool!