Hi all,
I used to follow Numenta quite closely about 5 to 6 years ago and was fairly familiar with HTM principles at the time. Since then, I haven’t kept up with developments, but I recently started exploring the Thousand Brains Project.
Maybe a naive question: From what I understand, TBP seems to describe a learning framework (including components like sensor modules, learning modules, voting mechanisms, etc.) rather than a specific learning algorithm. Is that correct?
Also, is it reasonable to think of HTM as fitting within a learning module under the broader TBP framework?
Welcome to the community! Yes and yes: TBP is a learning framework based on the columns in the cortex and their connectivity, whereas HTM Sequence Memory is a specific algorithm. It's correct to think that HTM Sequence Memory may well be an algorithm used within a learning module. A bit more on that in our FAQs - FAQ - Thousand Brains Project
It's a fair question, as the learning modules are the part of the system we change the most as we build out implementations to mirror our research progress.
Currently there are the following learning modules:

| Learning module class | Description |
| --- | --- |
| GraphLM | Learning module that contains a graph memory class and a buffer class. It also has properties for logging the target and detected object and pose, and functions for calculating displacements, updating the graph memory, and logging. This class is not used on its own but is the superclass of DisplacementGraphLM, FeatureGraphLM, and EvidenceGraphLM. |
| DisplacementGraphLM | Learning module that uses the displacements stored in graph models to recognize objects. |
| FeatureGraphLM | Learning module that uses the locations stored in graph models to recognize objects. |
| EvidenceGraphLM | Learning module that uses the locations stored in graph models to recognize objects and keeps a continuous evidence count for all its hypotheses. |
They all do different things and implement different abilities. What allows them all to work together is the Cortical Messaging Protocol (CMP) - Cortical Messaging Protocol
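To make the CMP idea a bit more concrete, here is a minimal sketch in Python (the class name and fields are my own illustration, not the actual Monty classes): every module's output takes the same shape, a pose in a common reference frame plus arbitrary features, which is why any LM can consume the output of any SM or of another LM.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class CMPMessage:
    """Hypothetical sketch of a CMP-style message.

    Every sensor module and learning module output has the same shape:
    a pose (location + orientation) in a common reference frame, plus a
    dictionary of arbitrary features. Because the format is uniform, the
    modules can be wired together in any configuration.
    """
    location: np.ndarray     # 3D location, shape (3,)
    orientation: np.ndarray  # e.g. a rotation matrix, shape (3, 3)
    features: dict = field(default_factory=dict)  # e.g. {"hue": 0.6, ...}
    confidence: float = 1.0  # sender's confidence in this observation


# Example: a message a sensor module might emit for one observed point.
msg = CMPMessage(
    location=np.array([0.1, 0.02, 0.3]),
    orientation=np.eye(3),
    features={"hue": 0.6, "point_normal": np.array([0.0, 0.0, 1.0])},
)
```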
Just a few more details to add to @brainwaves' response:
The current LM implementations build on each other (they are all subclasses of GraphLM and were developed one after the other, each improving on the previous version). You can find a more detailed comparison of them in this separate document if you are very interested. However, if you just want to get a general idea, I would recommend looking at the EvidenceGraphLM, as this is the most recent one we developed and what we are currently using for all our experiments. This page in the documentation delves a bit deeper into how this LM works.
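If it helps to see the evidence idea in miniature, here is a toy sketch (not the actual EvidenceGraphLM code; all names are made up): each hypothesis is an (object, pose) pair with a continuous score that gets nudged up or down after every observation, depending on how well the stored model predicts it.

```python
import numpy as np


def update_evidence(evidence, hypotheses, models, observed_location, sigma=0.01):
    """Nudge each hypothesis' evidence score for one observation.

    evidence:   1D array, one continuous score per hypothesis
    hypotheses: list of (object_name, rotation, translation) pose guesses
    models:     dict mapping object_name -> stored model points, shape (N, 3)
    """
    for i, (obj, rotation, translation) in enumerate(hypotheses):
        # Transform the sensed location into the candidate object's
        # reference frame under this hypothesis' guessed pose.
        loc_in_model = rotation.T @ (observed_location - translation)
        # Distance to the nearest stored point of that object's model.
        dists = np.linalg.norm(models[obj] - loc_in_model, axis=1)
        # Well-predicted observations add evidence, poorly predicted ones
        # subtract it; scores accumulate continuously across observations.
        evidence[i] += np.exp(-dists.min() ** 2 / (2 * sigma ** 2)) - 0.5
    return evidence


# Two pose hypotheses for a single toy object "mug":
models = {"mug": np.random.rand(100, 3)}
hypotheses = [("mug", np.eye(3), np.zeros(3)),
              ("mug", np.eye(3), np.array([0.0, 0.1, 0.0]))]
evidence = np.zeros(len(hypotheses))
evidence = update_evidence(evidence, hypotheses, models,
                           observed_location=np.array([0.5, 0.5, 0.5]))
```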
At a very high level, the current LMs learn very explicit models by storing points at locations in a Cartesian coordinate frame. Think of a 3D point cloud. This is a bit like fast, local, associative learning (Hebbian learning) in the brain. We don't use deep learning or global update rules. This comes with many advantages and is uniquely suited for learning from an ever-changing stream of sensorimotor inputs.
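As a toy illustration of what "storing points at locations" means (again a sketch, not our actual implementation): learning is just appending (location, feature) points to an explicit model, with no loss function or global weight updates.

```python
import numpy as np


class PointCloudModel:
    """Toy explicit object model: a list of (location, feature) points.

    Learning is fast, local, and associative: each new observation is
    simply stored (or skipped if a nearby point already exists). There
    is no gradient descent and no global update rule.
    """

    def __init__(self, merge_radius=0.005):
        self.locations = []  # 3D locations in the object's reference frame
        self.features = []   # feature dicts observed at those locations
        self.merge_radius = merge_radius

    def learn(self, location, feature):
        location = np.asarray(location, dtype=float)
        if self.locations:
            dists = np.linalg.norm(np.array(self.locations) - location, axis=1)
            if dists.min() < self.merge_radius:
                return  # we already have a point here; nothing else changes
        self.locations.append(location)
        self.features.append(feature)


model = PointCloudModel()
model.learn([0.0, 0.0, 0.0], {"curvature": 0.1})
model.learn([0.01, 0.0, 0.0], {"curvature": 0.4})
```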
In contrast to HTM, the current models in our LMs don't have a temporal component. We are actively working on this.
HTM could certainly be used as an algorithm inside an LM. In fact, we already looked into such an LM implementation a couple of years ago at Numenta. You can find the code in our monty_lab repository here (although it is not actively maintained). One important thing to note is that the HTM algorithm needs to be combined with a mechanism for path integration, to keep track of how movement of the sensor takes us through the object's reference frame and to learn structured models. In the implementation I linked, we use a grid-cell-like mechanism for this.
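In sketch form (hypothetical names, with plain vector addition standing in for the grid-cell mechanism), path integration just means accumulating the sensor's own displacements so each new feature can be assigned a location in the object's reference frame:

```python
import numpy as np


class PathIntegrator:
    """Toy path integration: track the sensor's location in the object's
    reference frame by summing its own movements.

    In the monty_lab HTM experiment a grid-cell-like mechanism plays this
    role in a more brain-like way; here we just accumulate vectors.
    """

    def __init__(self):
        self.location = np.zeros(3)  # start at the origin of the object frame

    def move(self, displacement):
        # Each motor action reports how the sensor moved; integrating these
        # displacements tells us *where* on the object the next feature is.
        self.location = self.location + np.asarray(displacement, dtype=float)
        return self.location


integrator = PathIntegrator()
loc_a = integrator.move([0.01, 0.0, 0.0])  # feature observed at loc_a
loc_b = integrator.move([0.0, 0.02, 0.0])  # next feature observed at loc_b
# (feature, location) pairs can now be stored in a structured object model.
```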
Not sure if those extra details are useful but maybe they give a bit more context and additional links to places where you can dig deeper. Let me know if you have more questions!
Thanks a bunch for the responses @vclay @brainwaves. And thanks for the amazing documentation as well.
I will dig deeper into the docs/code to understand them in detail.