Hi all,
This is my first time posting, so I don’t want to assume much here because I’m new to TBP and new to computational neuroscience. I’m an embedded systems engineer by trade. But this project excited me and I want to be a part of it in any way I can!
I’ve been diving deep into Monty and the Thousand Brains framework, and I’m exploring an idea I think might be worth developing into a formal RFC – but I’d love to get early feedback first to see if others think it makes sense.
What would it take to implement a phase-based learning module (PhaseLM), and should it be a core architectural direction?
The basic idea: rather than relying on index-based or world-relative coordinate systems to track feature locations, a PhaseLM would represent where a feature is sensed via circular phase spaces – inspired by grid/ring cell behavior. Phase would rotate as a function of movement, and when combined with feature signatures (the “what”), you’d get a natural pairing of [coefficients, phase] that could scale up into compositional object hierarchies.
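To make that concrete, here’s a toy sketch of the phase update I have in mind (all names are hypothetical, nothing here is existing Monty API):

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def advance_phase(phase, displacement, spatial_frequency=1.0):
    """Rotate phase as a function of movement; wraps on [0, 2*pi)."""
    return (phase + spatial_frequency * displacement) % TWO_PI

phase = 0.0
feature = np.array([0.8, 0.1, 0.3])  # made-up "what" signature
for step in (0.5, 0.5, 0.5):         # repeated movements along a surface
    phase = advance_phase(phase, step)
# (feature, phase) is the [coefficients, phase] pairing described above.
print(feature, phase)                # phase is now 1.5
```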
Phase provides a continuous, low-dimensional way to represent sensorimotor space without requiring full map-like representations, sidestepping the curse of dimensionality.
It aligns closely with how biological systems might encode motion and repetition in reference frames. And most importantly, it’s foundational: if this approach proves viable, it would shape how reference frames, compositionality, voting, and even memory systems are built on top of it. So it makes sense to explore it early, before higher-level mechanisms are too baked in.
I’m not proposing a full rewrite – just an optional PhaseLM module that could live alongside the existing GraphLM and demonstrate some basic phase tracking and prediction first. If it works well, it might bootstrap more complex capabilities like phase-based graph alignment, feature voting, and reference frame composition.
I also watched a video where Hawkins discussed the benefits of a phase-based system, although I can’t find it right now.
Proposed structure for PhaseLM
Phase ring buffers (RingModule)
- Each phase dimension (heading, position, curvature, etc.) is represented as a circular buffer
- Movement updates rotate phase values using learned transformations from motor input
- Each phase acts as a local cyclic representation of relative location within a reference frame
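A minimal sketch of what one RingModule might look like (hypothetical class; the fixed `spatial_frequency` is a stand-in for the learned transformation):

```python
import numpy as np

class RingModule:
    """One circular phase dimension (e.g. heading or position).

    `phase` is kept continuous; the buffer discretizes the ring so
    evidence can be accumulated per phase bucket.
    """

    def __init__(self, n_buckets=32, spatial_frequency=1.0):
        self.n_buckets = n_buckets
        self.spatial_frequency = spatial_frequency
        self.phase = 0.0                   # continuous phase in [0, 2*pi)
        self.buffer = np.zeros(n_buckets)  # evidence per phase bucket

    def rotate(self, motor_delta):
        """Advance phase by a transformation of motor input."""
        self.phase = (self.phase + self.spatial_frequency * motor_delta) % (2 * np.pi)

    def bucket(self):
        """Index of the buffer cell the current phase falls into."""
        return int(self.phase / (2 * np.pi) * self.n_buckets) % self.n_buckets
```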
Feature Coefficient Store
- Stores compressed representations of feature inputs (wavelet or sparse encodings)
- Each coefficient set is indexed by a phase tuple
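Roughly, I picture the store as a dictionary keyed by quantized phase tuples (again a sketch, all names made up):

```python
import numpy as np

class FeatureCoefficientStore:
    """Feature coefficients indexed by a quantized phase tuple."""

    def __init__(self, n_buckets=32):
        self.n_buckets = n_buckets
        self.store = {}  # phase-bucket tuple -> coefficient vector

    def _key(self, phases):
        """Quantize a tuple of continuous phases into bucket indices."""
        return tuple(
            int(p / (2 * np.pi) * self.n_buckets) % self.n_buckets
            for p in phases
        )

    def add(self, phases, coefficients):
        self.store[self._key(phases)] = np.asarray(coefficients)

    def lookup(self, phases):
        return self.store.get(self._key(phases))
```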
Phase-aware association graph
- Graph where edges link phase locations to specific feature coefficients
- Compositional objects are built by linking phase-anchored sub-features together.
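As plain data, the graph could be as simple as phase-anchored nodes plus edges carrying relative phase offsets (illustrative values only, not the existing GraphLM structures):

```python
# Nodes are phase-anchored features; edges carry the relative phase
# offset between sub-features. Composing objects means chaining these
# phase-anchored links.
graph = {
    "nodes": {
        "edge_A": {"phase": (0.2, 1.1), "coeffs": [0.8, 0.1]},
        "corner_B": {"phase": (2.5, 0.4), "coeffs": [0.3, 0.9]},
    },
    "edges": [
        # (src, dst, relative phase offset between their anchors)
        ("edge_A", "corner_B", (2.3, -0.7)),
    ],
}
```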
Phase voting and alignment
- When encountering a feature, the column attempts to align its current phase against known patterns to predict what feature should be sensed.
- Other columns can vote based on phase alignment to converge on a shared reference frame.
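One detail worth flagging: combining phase votes has to respect the wrap-around, so a circular mean (via unit complex numbers, standard circular statistics) seems like the natural combination rule. A sketch:

```python
import numpy as np

def circular_mean(offsets):
    """Combine phase-offset votes from several columns.

    Averaging angles directly breaks at the 0 / 2*pi wrap, so each vote
    is mapped to a unit complex number before averaging.
    """
    return np.angle(np.mean(np.exp(1j * np.asarray(offsets)))) % (2 * np.pi)

# Example: three columns propose offsets that straddle the wrap point.
votes = [6.1, 0.1, 0.2]       # all near 0 / 2*pi
print(circular_mean(votes))   # ~0.04, not the naive arithmetic mean ~2.13
```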
Motor-phase transformation functions
- Learned mappings from motor command deltas to phase deltas
- May be initialized with simple oscillators and refined over time using sensorimotor feedback
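As a starting point, the learned transformation could just be a linear map fit by least squares from observed (motor delta, phase delta) pairs, assuming per-step deltas are small enough that wrapping can be ignored:

```python
import numpy as np

def fit_motor_to_phase(motor_deltas, phase_deltas):
    """Least-squares fit of a linear map: phase_delta ~= motor_delta @ W.

    A place-holder for the learned transform; assumes small, unwrapped
    per-step phase deltas.
    """
    W, *_ = np.linalg.lstsq(motor_deltas, phase_deltas, rcond=None)
    return W

# Example: two motor dimensions driving one phase dimension.
rng = np.random.default_rng(0)
motor = rng.normal(size=(100, 2))
true_W = np.array([[0.5], [1.5]])
phase = motor @ true_W + 0.01 * rng.normal(size=(100, 1))
print(fit_motor_to_phase(motor, phase))  # recovers roughly [[0.5], [1.5]]
```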
Reference Frame Anchoring
- Columns anchor their phase spaces to reference frames derived from stable parent features (e.g. object surfaces or boundaries)
- Phase-relative locations allow nested, compositional hierarchies across features
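The anchoring operation itself could be as simple as expressing a child feature’s phase relative to the parent anchor (sketch):

```python
import math

def anchor_phase(child_phase, parent_anchor_phase):
    """Express a child feature's phase relative to a parent anchor phase.

    Re-anchoring when a stable parent feature (e.g. an object boundary)
    is detected makes child locations parent-relative, so the same
    sub-object pattern is reusable wherever the parent appears.
    """
    return (child_phase - parent_anchor_phase) % (2 * math.pi)

# A corner at absolute phase 2.5 on a boundary anchored at 2.0 is stored
# at relative phase 0.5, no matter where the object sits in the world.
print(anchor_phase(2.5, 2.0))
```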
Questions for the community:
- Has this kind of design been explored internally already?
- Would there be appetite for a PhaseLM RFC if the design and an early prototype look promising? I’d love feedback on drafting one.
- What design constraints or compatibility requirements should be considered if this were to integrate cleanly into Monty?
Would love to hear thoughts, pushback, or any relevant prior efforts. Thanks for building such a well-thought-out system. It’s early days, but I’m really excited about the direction of this project.