6-Minute Monty Explainer Video

I’ve been working through the TBP Core videos and the recent Monty 2025 updates—that’s 25+ videos to get oriented. I wanted a way to consolidate the main ideas about how Thousand Brains Theory actually manifests in Monty’s architecture, so I made this 6-minute explainer.

Understand that I’m not looking to jump into coding. Instead, I’m hoping to contribute questions and ideas at a more conceptual level. Many of the videos are quite helpful, and I love the openness of the brainstorming sessions. But there are a lot of weeds, and I need to make sure I’m seeing the larger picture as I catch up. I use this approach in my own development work: echoing back my main points to see if they’re getting across.

My process: I used the Dia browser’s chat feature (ChatGPT, I think) to pull out key takeaways from each video, then fed those to NotebookLM to synthesize into a video “explainer.” It’s mainly a learning tool for myself which I’m sharing in case it helps others trying to get their bearings.

Sources: TBP Core playlist (2021-2024) + Monty 2025 update videos
Fair warning: This is just my attempt to make sense of things, not official TBP documentation. If I’ve misrepresented anything, let me know—I can update the video description or take it down.

Here’s the link: https://youtu.be/yFELNz7pyYY


I’m new to this topic, is there any roadmap with tutorials about TBP?

@Trung_Nguyen I’m new to TBP myself, but there is a wealth of information available. You might take a look at their organized set of playlists (https://www.youtube.com/@thousandbrainsproject/playlists), where you’ll find a playlist for the Core concepts and even a Getting Started one. I suppose it depends on your interest in TBP.

Personally, I’d start with Hawkins’ book, “A Thousand Brains: A New Theory of Intelligence,” before diving into the project and Monty. But I’m new too, as I said. Others may have different suggestions. Good luck!


A couple of other good videos to check out:

A nice overview of cortical columns by Artem Kirsanov:

And the future applications presentation by @vclay


@Bryce_Bate Thanks for putting this together, it’s always good to see people trying to understand Monty and the ideas behind it.

That said, I do want to gently flag a general issue with AI-generated videos / summaries. LLM-generated content often produces something that sounds coherent and authoritative, but it tends to mix accurate points with subtle inaccuracies, especially when the subject matter is outside the distribution that the LLM was trained on.

A few examples of this:

  • The initial framing of compound vs. lens eyes suggests that lens eyes are much more complicated. This muddies the actual point: once evolution chooses a path, it is almost impossible to switch paths.
  • The narrator says “It famously struggles with catastrophic forgetting,” which seems to imply that deep learning has only one issue, when in fact there are many, many issues.
  • Monty is built on more than two core principles.
  • The comparisons shown mix older and newer versions of Monty; the more relevant comparison (here in our paper) is against Vision Transformers.
  • It implies that reference frames are sent between modules, which is not happening.

And there are a few more, which I won’t list here for brevity. None of this is meant as a critique of the intent, which we think is great; it’s just a caution that AI-generated summaries can give a false sense of understanding, especially for systems like ours that depart significantly from the mainstream.

For anyone interested, we’ve put together some guidance on the use of large language models: Large Language Model Use

We do agree that there needs to be more content like the video you produced to shorten the ramp-up period, and we’re going to be working on that over 2026.

Stay tuned!


I appreciate your statements of caution, @brainwaves, and I agree with you. For example, the somewhat longer “deep dive” identified 3 core principles. It wasn’t as constrained by its target length as the “explainer,” even though it used exactly the same source material, so it added more detail to the summary.

Your point about mixing “accurate points with subtle inaccuracies” is well-taken. It’s a challenge not only for LLMs but also for humans, with our many blind spots. And we must be watchful of the way in which mistaken views slip into the conversation.

But consider Wired’s “5 Levels” explainer series. Each account offers conceptual scaffolding suited to the listener’s level of comprehension. In doing so, there are inevitable technical inaccuracies, as some will note. But explaining to my young grandson that the Earth is round and not flat (as some once believed) is to use language suitable for the age and occasion, not to strive for more precision by calling it an “oblate spheroid” (I had to look that up. Ha!).

My main point is that no one should think that a 6-minute summary of 25 videos (which, in turn, encapsulate years of research, writings, and countless conversations) is more than it is. The map is not the territory. I value the details, and in the right context, they matter a great deal.

Leopold wrote, in A Sand County Almanac, about a passerby driving through the country seeing only fence lines with weeds, whereas he (Leopold) saw entire flourishing ecosystems. For all of us interested in understanding more about the most complex organ in the universe, we need a lot of scaffolding to reach even a partial level of the understanding that others have achieved.

Again, I really do appreciate the clarifications and cautionary note about the generative summaries. Had I summarized those 25 videos myself, I most likely would have made all those mistakes and confusions, and more. Those ideas evolved and changed in places, and it’s just difficult if you weren’t there as they were happening over a long time. Where’s the Vulcan mind-meld when you need it?


Oh yeah! I remember that concept and being impressed. This would be interesting to keep in mind as we build out our future material.

Thanks for the thoughtful reply.

I think Vulcan mind-meld will be after we complete Theory of Mind in Monty. :vulcan_salute:
