Niels presents a proposal for when to integrate additional neural elements into the Monty implementation. He focuses on two key areas: representations in the CMP, and reference frame representations. He outlines potential approaches for both, how close to biology they are, and what their advantages and drawbacks are. The team discusses these approaches and their relevance to our research roadmap.
I just watched this video and found it fascinating. It brought out a lot of the tension between biomimicry and software engineering; it will be very interesting to see how this plays out over the life of the project.
I realize that this is an older video and that it probably wasn't originally intended as a presentation. That said, I noticed several things that should be addressed in future videos of this sort.
First, Jeff's audio quality ranged from "OK" to nearly unintelligible. Given his habit of leaning back and forth, perhaps he should use a head-mounted microphone.
Another problem I had was that only the comments from Jeff, Michael, Niels, and Viviane were accompanied by a video stream. Another (male?) speaker remained anonymous and unseen.
Finally, I found it very hard to concentrate on (say) Viviane's comments while the video was bouncing between slides, showing editing in progress, etc. It also seemed like Niels was trying to multitask between editing and talking, to the detriment of both.
All of that said, this was a wonderful way to be a fly on the wall at one of TBP's working sessions.
The question of timing information in the CMP came up in the discussion. I'd like to comment on this, because I've been thinking about it recently. My take, FWIW, is that the brain definitely uses timing information as part of its processing.
For example, note the way that phase information from the theta wave plays into scale invariance. More generally, it seems like signals coming in around the same time would tend to be processed together in some manner. I have no clue how all of this will be translated into code, but I'm eager to find out.
At the same time, I'd suggest putting some sort of timing information into the CMP, if only for purposes of tracking, visibility, etc. Here are some notions (a code sketch follows below):
- a counter value (e.g., outgoing message 42 from this Actor)
- the Actor's notion of the time (see NTP for reconciliation ideas)
- some variant of CRDTs, as used in Phoenix Presence
FWIW, here is an introductory talk on CRDTs:
ElixirConf 2015 - CRDT: Datatype for the Apocalypse, by Alexander Songe
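To make the tracking idea a bit more concrete, here is a minimal Python sketch of the kind of metadata I have in mind. The `CmpEnvelope` and `Sender` names are invented for illustration; they aren't part of Monty's actual CMP:

```python
import itertools
import time
from dataclasses import dataclass


@dataclass
class CmpEnvelope:
    """Hypothetical wrapper that adds timing metadata to a CMP message."""
    sender_id: str  # which Actor (e.g., learning module) sent this
    seq: int        # per-sender counter: "outgoing message 42 from this Actor"
    sent_at: float  # the Actor's local clock; NTP-style reconciliation happens elsewhere
    payload: dict   # the actual CMP content (features, pose, confidence, ...)


class Sender:
    """Wraps outgoing payloads with a monotonically increasing counter."""

    def __init__(self, sender_id: str):
        self.sender_id = sender_id
        self._counter = itertools.count(1)

    def wrap(self, payload: dict) -> CmpEnvelope:
        return CmpEnvelope(
            sender_id=self.sender_id,
            seq=next(self._counter),
            sent_at=time.time(),
            payload=payload,
        )
```

Even if receivers ignore `sent_at`, the `(sender_id, seq)` pair alone would make message loss and reordering visible in logs.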
I'd also like to comment on the topic of arrays and vectors, as used in the CMP. One problem with these data structures, as pointed out by Rich Hickey, is that they "complect" position and meaning. As a result, they aren't self-documenting and can be brittle in the face of changing requirements, etc. For details, see:
Rails Conf 2012 Keynote: Simplicity Matters, by Rich Hickey
Another problem is that they can be very inefficient for holding sparse data. A map (i.e., object, hash) can represent the presence of a single feature with a name/value pair; a vector, in contrast, would need enough positions to hold all possible features. Similarly, a 2D array could be both inflexible and inefficient for storing sparse sets of pixels.
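To make the contrast concrete, here's a small Python illustration (the feature names and values are invented for the example):

```python
# A map records only what is present; each entry documents itself.
observed = {"curvature": 0.8, "hue": 0.3}

# A vector must reserve a slot for every possible feature, and the
# reader has to know the positional convention to interpret it.
FEATURES = ["curvature", "hue", "depth", "normal_x", "normal_y"]  # ...etc.
dense = [0.0] * len(FEATURES)
dense[FEATURES.index("curvature")] = 0.8
dense[FEATURES.index("hue")] = 0.3

# The same trade-off applies to sparse pixels: a dict keyed by
# coordinates vs. a 2D array that is mostly zeros.
sparse_pixels = {(10, 42): 255, (11, 42): 254}
```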
I really appreciate Jeff's perseverance in pointing out common misconceptions. I agree that this cannot be reiterated enough, because the true-ish shortcuts can feel much more satisfying. A lot of concepts have to fall into place in one's head before one suddenly becomes intolerant of the potential dead ends, of which there are probably infinitely many. And the "right" solution is, one could argue, unique, grounded in biology (disregarding computational pragmatism for a moment).
"Reality is non-intuitive."
I think the biggest hurdle is that an SDR is an abstraction of the state of the input connections of a neuron, and not a binary number. In the brain those connections are made and broken by the neuron, so the size of an SDR for a neuron will be changing over time, and what a member of that set is connected to will be changing as well.
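A toy illustration of that point (my own sketch, not HTM or Monty code): model the SDR as the neuron's changing set of connected inputs, rather than as a fixed-width binary number:

```python
class ToyNeuron:
    """Toy model: the 'SDR' is the set of currently connected inputs
    that happen to be active, not a fixed-width binary number."""

    def __init__(self):
        self.connections: set[int] = set()   # ids of presynaptic sources

    def grow_synapse(self, source_id: int) -> None:
        self.connections.add(source_id)      # connections are made...

    def prune_synapse(self, source_id: int) -> None:
        self.connections.discard(source_id)  # ...and broken over time

    def active_sdr(self, active_inputs: set[int]) -> set[int]:
        # The 'on bits': connected inputs that are active right now.
        # Both the size of this set and what its members connect to
        # change as synapses grow and are pruned.
        return self.connections & active_inputs
```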
One neuron isn't enough to get across what's happening. Creating a visualization for that basic function using a hundred or more neurons in a two-layer hierarchy with I/O in a simple animation would probably help get the idea seeded.
I binged the Feynman Lectures recently, and he repeatedly talked about how difficult it is to imagine the unimaginable. We still can't imagine what is really happening in the two-slit experiment. The underlying simplicity of the brain, repeated billions of times, makes human consciousness possible using the power of a flashlight battery. The number of connections required on a neuron increases as the behavior (and mental activity is a hidden behavior) becomes more complex. Our ability to comprehend abstract concepts comes not just from our neuron count being higher than that of other animals; the number of connections on each neuron is also far higher, allowing for more complex structures to be created that are able to learn and navigate more complex models.
The complete connectome for C. elegans is available. It's the simplest brain we know of, and it only navigates physical space. The behavior of most animals is passed on genetically. The connectome of the roundworm's brain can interact with the world already.
With a roadmap, a team on a parallel track could work on hardware design based on that structure.
Well said!
I plan on coming back and responding to this more later (a bit behind at work atm), but your comment on "hidden behavior" caught my attention. It appears that "privacy" may in fact be an emergent property of self-organizing machines in general. We get similar obscurity in things as simple as virtual embryogeny. There was a paper on this recently, regarding stress sharing, that you might find interesting.
Interesting paper. The Universe itself is self-organizing. The processes behind that are invisible to us and can only be inferred from their effects. Brains, and by extension knowledge, have the same problem. We see the results, but the processes producing those results are invisible to us. On top of that, our existing misconceptions about how reality functions are also invisible to us and distort our perception, which in turn distorts our model of the world and how it works. It's pretty amazing that we've managed to figure out as much as we have.
On the topic of imagining the unimaginable…
I keep returning to the idea of how easy it is to focus on things that are present and how difficult it is to perceive things that aren't present. In Incomplete Nature, Terrence Deacon proposes an entire hierarchy of increasing complexity based entirely on increasingly sophisticated mechanisms for maintaining constraints.
Absence has no components, and so it can't be reduced or eliminated. Or, to be a bit less cryptic: Constraint is the fact of possible states not being realized, and what is not realized is not subject to componential analysis. Reductive analysis can thus irretrievably throw away information about the basis of higher-order causal power.
– Deacon, T. W. (2012). Incomplete Nature (1st ed., p. 204). W. W. Norton & Company, Inc.