Hi everyone,
I’ve been thinking about an idea recently—I’m not sure how useful or biologically plausible it is, but I wanted to share it and hear your thoughts.
Would it be beneficial for the voting mechanism if learning modules could also pass along the reasoning behind their votes? That way, other modules could potentially “fact-check” the reasoning from their own perspective. Of course, this wouldn’t apply in all cases, so it could be implemented optionally.
Also, a separate question (possibly unrelated, but still connected to the CMP extension as I understand it): could someone provide more context or clarification around the “future work” point? It would help deepen my understanding of the longer-term vision for the CMP message format.
Thanks for taking the time to read this! I’m open to any thoughts, suggestions, or critiques.
@firemanc I’m going to move this into the Theory section.
And a question - what kind of reasoning are you thinking about? Reasoning quite often implies some kind of LLM-style bias, and cortical columns are not equivalent in this way.
Thanks for the fast response!
Just to clarify — I’m not referring to “reasoning” in the same way we talk about it with LLMs. I’m thinking more in terms of how humans collaborate.
For example, when a group of people work together, person A might say to person B that they believe X (which I see as somewhat analogous to the voting mechanism in TBP). But often in those interactions, A will also explain why they believe X — either spontaneously or in response to B’s request.
So I was just wondering if something similar could be useful as an additional layer of abstraction — where one module provides its belief along with some rationale, and the receiving module can “fact-check” it based on its own current knowledge (its object model or, more generally, its model of the world).
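To make the "fact-check" idea a bit more concrete, here is a rough sketch of how a receiving module might compare a sender's stated rationale against its own object model. Everything here is hypothetical (the function, field names, and threshold are mine for illustration, not part of Monty or the CMP):

```python
# Hypothetical sketch (not Monty/CMP code): a receiving module "fact-checks"
# another module's stated belief by comparing the sender's rationale against
# features from its own object model, and returns a trust weight.

def fact_check(own_model_features: dict, claimed_rationale: dict,
               tolerance: float = 0.2) -> float:
    """Return a trust weight in [0, 1]: the fraction of the sender's
    claimed feature evidence that agrees with the receiver's own features."""
    agreements = [
        1.0 if abs(own_model_features.get(feature, 0.0) - value) < tolerance else 0.0
        for feature, value in claimed_rationale.items()
    ]
    return sum(agreements) / max(len(agreements), 1)

# Module B checks Module A's claim against its own current observations
trust = fact_check(
    own_model_features={"curvature": 0.3, "texture": 0.9},
    claimed_rationale={"curvature": 0.8, "texture": 0.1},
)
print(trust)  # low agreement -> B could down-weight A's vote
```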
Hey @firemanc, good to see you on the TBP forums, welcome!
First Question
Interesting question about voting; could you elaborate a bit more on what you mean by “reasoning behind their votes”? Maybe by working through a concrete example, like voting on a mug?
Second Question
I’m glad you highlighted that Future Work section before starting to work on it - I would say that it is a bit outdated at this point (I’ll try to update the Readme soon). We have had some more concrete ideas on object behaviors, and we are moving away from anything that resembles graph neural networks. Let me know if you need help finding the videos where we’ve discussed object behaviors.
This is a good thing, because it would enable an architecture that can learn behaviors with Hebbian, associative connections. The result is that learning would be much faster than typical gradient-descent methods.
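For intuition, here is a minimal, illustrative sketch of a Hebbian-style associative update (not TBP/Monty code): connections strengthen wherever pre- and post-synaptic activity co-occur, so an association can form from a single pairing, in contrast to gradient descent, which typically needs many small error-driven updates.

```python
import numpy as np

# Illustrative Hebbian/associative update (not TBP/Monty code).
rng = np.random.default_rng(0)
n_pre, n_post = 8, 8
weights = np.zeros((n_post, n_pre))

def hebbian_update(weights, pre, post, lr=1.0):
    """Strengthen weights where pre- and post-activity co-occur."""
    return weights + lr * np.outer(post, pre)

# One observed pairing of two activity patterns forms the association
pre = (rng.random(n_pre) > 0.5).astype(float)
post = (rng.random(n_post) > 0.5).astype(float)
weights = hebbian_update(weights, pre, post)

# Recall: presenting the pre-pattern now drives the associated post-pattern
recall = weights @ pre
print((recall > 0).astype(int))
print(post.astype(int))
```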
Hope that provides some clarity, but feel free to elaborate on what you were wondering about the CMP messages.
Hi @nleadholm, thank you for the detailed response.
First Question
Let me work through the coffee cup example to illustrate what I mean by ‘reasoning behind votes.’
Currently, when learning modules vote, they share their evidence scores and pose hypotheses, but not the factors that contributed to those evidence scores. In the coffee cup scenario:
Learning Module A (thinking it sees a drawer pull) might have:
- High evidence from curvature matching (cylindrical shape)
- Low evidence from texture features (smooth vs expected wood grain)
- Moderate evidence from size/scale
- Evidence influenced by its viewing angle and lighting conditions
Learning Module B (thinking it sees the coffee cup logo) might have:
- High evidence from local texture patterns
- Low evidence from overall shape context
- Evidence based on different lighting/viewing conditions
The ‘reasoning’ I’m proposing would be sharing these evidence decompositions - not just the final evidence score, but the sensory features and contextual factors that contributed to it (a rough sketch in code follows the lists below). This could include:
- Which features matched well vs poorly
- How viewing angle/pose affected the assessment
- What alternative hypotheses were considered and why they were rejected
Potential benefits:
- Module B could recognize that A’s ‘drawer pull’ hypothesis is based mainly on partial shape information from a limited viewing angle
- A could learn that B has better texture information from its sensor position
- Both could weight each other’s votes more appropriately based on the quality and type of evidence
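To make this concrete, here is a rough sketch of what such a "decomposed" vote might carry. The class and field names are hypothetical and are not part of the actual CMP message format; they just illustrate attaching an optional breakdown alongside the usual evidence score and pose hypothesis:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: these classes and field names are NOT the actual
# CMP message format; they illustrate an optional "reasoning" payload that
# could ride along with a vote.

@dataclass
class EvidenceBreakdown:
    feature_contributions: dict   # e.g. {"curvature": 0.8, "texture": 0.1}
    viewing_conditions: dict      # e.g. {"viewing_angle_deg": 35, "lighting": "dim"}
    rejected_hypotheses: dict     # e.g. {"coffee_mug": "only partial shape visible"}

@dataclass
class DecomposedVote:
    object_id: str                # the hypothesis being voted on
    evidence_score: float         # roughly what is already shared today
    pose_hypothesis: tuple        # hypothesized rotation (degrees)
    breakdown: Optional[EvidenceBreakdown] = None  # the optional "reasoning"

# Learning Module A's vote for "drawer pull", with its reasoning attached
vote_from_lm_a = DecomposedVote(
    object_id="drawer_pull",
    evidence_score=0.62,
    pose_hypothesis=(0.0, 90.0, 0.0),
    breakdown=EvidenceBreakdown(
        feature_contributions={"curvature": 0.8, "texture": 0.1, "scale": 0.5},
        viewing_conditions={"viewing_angle_deg": 35, "lighting": "dim"},
        rejected_hypotheses={"coffee_mug": "only partial shape visible"},
    ),
)
```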
Second Question
I’ve been following some of the recent YouTube brainstorming sessions and the new ideas around object behaviors, so that might be part of why the GNN approach initially confused me—I was struggling to see how it fits into the bigger picture. Your clarification helped a lot, thank you!
Great, thanks for that further clarification, and glad to hear the last point was helpful.
Re. the suggestion, in general we don’t want voting to share full models or detailed inference information. This is for a variety of reasons, including:
i) We can’t guarantee that they know the same models. Especially if the columns are from different modalities, their models will look very different. Being able to “understand” the model of another column would require having access to the entire model.
ii) It isn’t biologically plausible that columns would share all of this information - the only thing they can vote on is high-level summary statistics that are stable in time and tend to correlate with one another, such as the object “ID” and its rotation in space. This seems to be enough for the robustness of human perception, and it would be much more computationally efficient than trying to share lots of detailed model and inference information. As such, we want to constrain the complexity of what we share.
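To make the contrast with the decomposed vote sketched earlier concrete, here is a minimal sketch of the kind of constrained, summary-level vote described above. Again, the field names are hypothetical and not the actual CMP schema; the point is just that only stable, high-level statistics are shared:

```python
from dataclasses import dataclass

# Hypothetical sketch (not the actual CMP schema): a vote carries only
# stable, high-level summary statistics - an object ID hypothesis, its
# rotation in space, and an evidence score - with no model details.

@dataclass
class SummaryVote:
    object_id: str      # hypothesized object identity
    rotation: tuple     # hypothesized rotation of the object in space (degrees)
    evidence: float     # confidence in this hypothesis

vote = SummaryVote(object_id="coffee_mug", rotation=(0.0, 45.0, 0.0), evidence=0.8)
print(vote)
```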
Hope that makes sense, happy to clarify further.