2021/11 - Initial Outline of the Requirements of Monty Modules

Jeff provides definitions of terms: Pose, Body, Sensor, Feature, Module (now called Sensor Module), Objects.
He explains how voting enables one-shot object recognition, and also discusses how Modules are arranged in a hierarchy.

Additional discussion focuses on open questions of understanding hierarchy, motor behavior, “stretchy” graph, states, models in “where” columns, and feature discrepancies.

2 Likes

Vocabulary question: is a Monty Module (from the video title) different from the Module / Sensor Module you mentioned in your message?

1 Like

Great question! These terms were a bit fuzzy when we were first thinking about them. Currently, we don’t use the term Monty module. It’s always either sensor module or learning module. The sensor module turns raw sensory input into the Cortical Messaging Protocol (CMP), and the learning module models the incoming (CMP-compliant) data. We are also thinking about adding a third type of module: motor modules, which turn CMP-compliant goal states into actions that specific actuators understand.
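To make the division of labor concrete, here is a minimal sketch of that pipeline. All class and field names here are hypothetical illustrations of the idea, not the actual Monty API: a sensor module converts a raw observation into a CMP-style message (a pose plus features), and a learning module models the incoming messages.

```python
from dataclasses import dataclass

# Hypothetical sketch of the sensor-module -> CMP -> learning-module
# pipeline described above. Names are illustrative, not the Monty API.

@dataclass
class CMPMessage:
    """A CMP-style message: a pose (location + orientation) plus features."""
    location: tuple      # (x, y, z) position of the sensed patch
    orientation: tuple   # e.g. surface normal at that location
    features: dict       # sensed features, e.g. {"color": "red"}

class SensorModule:
    def to_cmp(self, raw_patch: dict) -> CMPMessage:
        # Turn raw sensory input (here a dict standing in for an image
        # patch with depth) into a CMP-style message.
        return CMPMessage(
            location=raw_patch["location"],
            orientation=raw_patch["normal"],
            features={"color": raw_patch["color"]},
        )

class LearningModule:
    def __init__(self):
        self.model = []  # toy "model": a list of (pose, features) pairs

    def observe(self, msg: CMPMessage) -> None:
        # Model the incoming CMP-compliant data (here: just store it).
        self.model.append((msg.location, msg.orientation, msg.features))

sm, lm = SensorModule(), LearningModule()
msg = sm.to_cmp({"location": (0.1, 0.2, 0.3), "normal": (0, 0, 1), "color": "red"})
lm.observe(msg)
print(len(lm.model))  # 1
```

The key point is that the learning module never sees raw sensor data, only CMP-style messages, which is what makes the modules interchangeable.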

3 Likes

Hi @brainwaves,
is the PDF document from the Numenta website still up to date with all the content in the released videos, or were there some major or minor changes? And another question: do you know which programming language the open-source project will be in? Thank you a lot.

1 Like

@weiglt Do you mean this document? https://www.numenta.com/wp-content/uploads/2024/06/Short_TBP_Overview.pdf

If so, yes, the majority of the videos are in line with the outline there. In the videos, some of the names of the components were not yet finalized.

Once the documentation is live that will become the default place for all the most up to date information.

The code for our first reference implementation will be in Python.

1 Like

@brainwaves yes, this is the one. Thank you for this information, and it sounds great to me that Python is the implementation language :)

But I think the core modules (LMs, SMs) should be implemented in C++ for better acceleration.
@brainwaves, do you think the current Python implementation is fast enough?

1 Like

@Binh great question!

At the moment our implementation is in the feature-build-out/exploration phase. We’re investigating which approaches work while also aligning with the principles of the Thousand Brains Theory. For that reason, we’ve chosen an expressive language with the kind of core libraries that Python offers.

That said, we do care about how many iterations training takes, how many iterations inference takes, and how much data the system requires to function. We have a set of benchmarks that ensure we’re always improving and that no functional modifications to the code negatively affect our benchmarks. These benchmarks are published in our documentation (Benchmark Experiments), so you can check those out once our code is live.

Another aspect of performance is that, compared to deep learning, Monty requires orders of magnitude less data/memory/CPU to function. So while performance will be critical at some point, we don’t think it will be as all-consuming as the race to make deep learning systems performant.

Lastly, I’d say that because Monty is a modular system composed of Sensor Modules, Learning Modules, and Motor Modules, we can selectively decide to improve the performance of any part of the system. If we want to rewrite a learning module in Mojo or C, that would be simple, assuming that learning module uses the CMP to communicate.
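The argument above can be sketched in a few lines. This is a hypothetical illustration, not the real Monty interface: as long as every learning module speaks the same message format, its internals can be swapped, for example for a version whose methods delegate to compiled code.

```python
from abc import ABC, abstractmethod

# Illustrative sketch (not the actual Monty API): the rest of the system
# depends only on this interface, so implementations are interchangeable.

class LearningModule(ABC):
    @abstractmethod
    def observe(self, message: dict) -> None: ...

    @abstractmethod
    def most_likely_object(self) -> str: ...

class GraphLM(LearningModule):
    """Explicit graph-style LM in pure Python (easy to inspect/debug)."""
    def __init__(self):
        self.observations = []

    def observe(self, message):
        self.observations.append(message)

    def most_likely_object(self):
        return "mug" if self.observations else "unknown"

class NativeLM(LearningModule):
    """Same interface; internals could call into C or Mojo via FFI."""
    def __init__(self):
        self.count = 0

    def observe(self, message):
        self.count += 1  # imagine this delegating to compiled code

    def most_likely_object(self):
        return "mug" if self.count else "unknown"

def run(lm: LearningModule, messages):
    # The caller is oblivious to which implementation it was handed.
    for m in messages:
        lm.observe(m)
    return lm.most_likely_object()

msgs = [{"location": (0, 0, 0), "features": {"color": "red"}}]
print(run(GraphLM(), msgs), run(NativeLM(), msgs))  # mug mug
```

Swapping `GraphLM` for `NativeLM` changes nothing for the caller, which is the property that lets performance work proceed module by module.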

Eventually, we think these modules should be rewritten at the hardware level so you can have chips that operate as fast as the state of the art in microprocessors. :rocket:

4 Likes

@brainwaves understood, thanks.
I have just looked at the current version of Monty and it has almost no relation to any module of HTM concept like Encoder, SP, TM, GridCells based Location Modules.
Why does not Numenta use HTM in TBP?

1 Like

Great question @Binh

The reason we are not using Hierarchical Temporal Memory (HTM) or the Spatial Pooler (SP) is that to start, we wanted to have very explicit and easy-to-visualize / debug representations so we can figure out the overall structure and messaging protocol of this new framework. However, the ideas are not contradictory at all, and we definitely imagine having HTM + grid-cell-based learning modules (LM) and incorporating the spatial pooler. The system is designed so that each component can easily be customized as long as it adheres to the cortical messaging protocol (CMP) we defined, so even today, you can get started implementing an HTM-based LM.

We still have many other research questions to work out where more explicit graph representations are useful, so we are sticking with this LM version for now.

Here is a bit more on the topic of TBP vs. HTM: FAQ - Thousand Brains Project

We also implemented an HTM-based LM a while ago: monty_lab/temporal_memory at main · thousandbrainsproject/monty_lab · GitHub

@brainwaves thanks for your explanation.

1 Like

I hope when you wish to improve Monty’s performance, you give consideration to Julia (see Python To Julia For Data Scientists), thereby maintaining Python’s expressiveness but gaining C’s performance (and all of Python’s libraries are easily callable from Julia).

1 Like