A Possible Approach ... Darwin | Cover | A Thousand Brains

A Possible Approach to AGI.pdf (2.2 MB)

As promised.

Thoughts appreciated.

2 Likes

Interesting read; I particularly like the concept of hyper-tetrahedrons as the optimal solution
in an n-dimensional problem space, if I am understanding it correctly. It does rather beg the question
‘why aren’t all fund managers running this algorithm, sitting back and raking in the profits?’
Or perhaps they are.

I struggle with how to get from sensory nerve signals, to hyper-tetrahedrons and back
to motor nerve signals.

A short thought experiment:

  1. Driving a car requires intelligence, we presume; it’s not something even the smartest ape could do.

  2. If I train an LLM on all the information about cars, roads and driving there is in the world, it will
    be able to correctly answer any question about driving a car. Does it know how to drive a car?

  3. I expect the answers will be yes to part 1 and no to part 2. Driving a car requires muscle memory and extensive situational awareness that can only be acquired through practical experience. The intelligence required, once acquired, is eventually exercised at an almost subconscious level, without the fully conscious attention a learner driver must apply.

So, I have an artificial creature with a whole bunch of sensory nerves from all of the artificial muscles and eyes and ears and balance organs, and another bunch of motor nerves stimulating the artificial muscles. What goes in between is the AGI system: Monty, Cover or some such.

But lower down it’s not an n-dimensional problem to optimise. It’s a hole where I want to place the foot, a step to climb over, a room to navigate. This feels more like an n-dimensional feedback control loop, something I like to call behavioural intelligence. Unfortunately that term is commonly used to mean intelligence about human behaviour, whereas I mean creating behaviour in an artificial creature, behaviour which could be said to be intelligent. This control loop could be implemented in the form of a recurrent neural network with pliable hidden states.
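To make the idea concrete, here is a minimal sketch of what I mean by an n-dimensional feedback control loop: noisy sensory readings come in, a correction is computed, and a motor command moves the body toward a goal (e.g., where to place a foot). The gains, the noise level, and the trivial “body” dynamics are all my own toy assumptions, not anything from the memo:

```python
import numpy as np

def control_loop(target, x0, kp=0.4, kd=0.2, steps=200, noise=0.01, seed=0):
    """Toy n-dimensional feedback loop: sensory reading -> correction -> motor command."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    prev_err = target - x
    for _ in range(steps):
        sensed = x + rng.normal(0.0, noise, size=x.shape)  # noisy sensory nerves
        err = target - sensed                              # compare sensation with goal
        motor = kp * err + kd * (err - prev_err)           # motor nerve command (PD-style)
        x = x + motor                                      # the body moves a little
        prev_err = err
    return x

target = np.array([1.0, -0.5, 2.0])          # e.g., where to place the foot, in 3-D
final = control_loop(target, x0=[0.0, 0.0, 0.0])
print(np.abs(final - target).max())          # residual error stays near the sensor noise floor
```

The point is only that nothing “model-like” is needed at this level: the loop never identifies anything, it just keeps reducing an n-dimensional error. The higher functions would sit on top, retargeting or retuning this loop.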

The higher functions would then be built on top of the n-dimensional feedback control loop, modifying the control as it extracts more complex patterns from the sensory data and formulates more sophisticated responses. I think what I am saying is that something like an old brain implementation is required as an interface between a neocortex model and the real world, and it’s real-world behaviour that we perceive, not unreasonably, as intelligence.

Fund managers operate primarily as risk takers and speculators imagining (against mountains of evidence) that they can beat markets. The most sophisticated ones attempt to lay risk off on other participants (this often looks like market manipulation, e.g., high frequency trading accessing information not yet available to the public).

Cover’s universal portfolio doesn’t have any excitement to it. It just works, positioning one to do as well as one can reasonably do and to do so with low volatility over the long run.
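For anyone unfamiliar with it, here is a toy sketch of the idea behind Cover’s universal portfolio for two assets: track a grid of constant-rebalanced portfolios (CRPs) and hold their performance-weighted average, whose wealth equals the plain average of the CRP wealths. The grid size and the toy price sequence are my own illustrative choices:

```python
import numpy as np

def universal_portfolio(price_relatives, grid=21):
    """Cover-style universal portfolio for 2 assets: the average wealth
    over a grid of constant-rebalanced portfolios (CRPs)."""
    bs = np.linspace(0.0, 1.0, grid)           # fraction held in asset 0, per CRP
    wealth = np.ones(grid)                     # each CRP starts with wealth 1
    total = []
    for x in price_relatives:                  # x[i] = today's price / yesterday's, asset i
        growth = bs * x[0] + (1 - bs) * x[1]   # per-period growth of each CRP
        wealth *= growth
        total.append(wealth.mean())            # universal wealth = average over CRPs
    return np.array(total)

# Two toy assets: one alternates doubling/halving, the other stays flat.
x_seq = np.array([[2.0, 1.0], [0.5, 1.0]] * 10)
w = universal_portfolio(x_seq)
print(w[-1])   # grows despite neither asset gaining on its own
```

No prediction, no excitement: each CRP that happens to fit the market accumulates wealth and thereby dominates the average, which is the evolution-like reweighting over time.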

I think A Thousand Brains (or some Cover variant of it) does operate via n-dimensional feedback control loops. In my understanding, they differ process-wise in the order of operations.

A Thousand Brains:

  1. encounters something,
  2. models the input,
  3. identifies (applying the just created model).

Cover variant:

  1. encounters something,
  2. identifies (applying the last created model), then
  3. updates the model for the next encounter.
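The two orderings above can be sketched as two loop bodies. The `NearestMemory` model and the step functions are purely my own illustration (nothing to do with Monty’s actual API); the only point is where `update` sits relative to `identify`:

```python
class NearestMemory:
    """Toy model: remembers observations, identifies by nearest stored value."""
    def __init__(self, seen=()):
        self.seen = list(seen)
    def update(self, obs):
        return NearestMemory(self.seen + [obs])
    def identify(self, obs):
        if not self.seen:
            return None
        return min(self.seen, key=lambda s: abs(s - obs))

def thousand_brains_step(model, obs):
    model = model.update(obs)           # 2. model the input first
    return model, model.identify(obs)   # 3. identify with the just-created model

def cover_step(model, obs):
    label = model.identify(obs)         # 2. identify with the last model
    return model.update(obs), label     # 3. then update for the next encounter

m1 = m2 = NearestMemory()
_, l1 = thousand_brains_step(m1, 5.0)   # fresh model already contains 5.0
_, l2 = cover_step(m2, 5.0)             # old model was still empty
print(l1, l2)  # 5.0 None
```

The first encounter shows the trade-off: the model-first ordering can identify immediately but pays the modelling cost up front; the identify-first ordering answers instantly from stale state and only catches up on the next encounter.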

A Cover approach operates faster in real time, with a trade-off in accuracy. That said, over the longer run it reaches the same success with respect to the unknowable future. Cover’s algorithms operate like evolution, over time.

I think lots of high-level intelligence operates at a subconscious level.

The psychiatrist Anton Ehrenzweig published The Hidden Order of Art: A Study in the Psychology of Artistic Imagination in 1967. In sections of the work, he posits that the complexity of certain tasks goes beyond the capacity of the conscious mind to perform them efficiently, effectively, or practically at all.

Yet we (humans) do them. Ehrenzweig observes that those who can get out of their own way access the deeper and broader capacity of our subconscious minds: the organizational powers of the subconscious do extraordinary things. We all have experiences of this. Creativity. Lateral thinking. Flow states. Inspiration. Epiphanies.

Federico García Lorca famously described flamenco’s idea of duende as “…a power, not a work; a struggle, not a thought,” something outside the individual’s consciousness.

“No-mind” states, from meditative traditions like Zen and Daoism (mushin or wuxin), describe not an absence of awareness or a complete cessation of brain activity, but rather a state of pure consciousness: freedom from conceptual thought, emotional attachment, and the ego-based “thinking mind”, which opens access to the whole mind.

I don’t see that one needs much beyond the operation of reference frames (maybe + Cover) in silicon to replicate this.

Unless I’m mistaken, Monty doesn’t actually build a fresh model prior to object identification. It uses existing hypotheses to interpret sensation immediately, then updates the model if and when appropriate. So Monty’s operation order, to me, seems closer to the Cover variant you described.

That said, I like your framing of Cortical Columns as a kind of distributed MoE network. That’s a fun way of viewing it. Using simplex geometry to model module consensus is also clever.

As for thoughts… I’d love to see an expansion of the memo’s ‘Innovation and Creativity’ section. To me, evolution-style reweighting optimizes that which already exists, but it doesn’t really explain how the system invents new experts. So from within this framework, where would you say abstraction comes from? How is it achieved?

1 Like

The same way brains already do it: extending a reference frame to incorporate some combination of things that we hadn’t previously combined, which, in being combined, enables something or solves a problem that we, or even experts in a field, have previously failed to solve.

We don’t create ex nihilo. We can only combine things or ideas about things that already exist in the world (world certainly includes the body of human knowledge about the world).

A story came down to me that illustrates this. P&G sent a group of summer interns to Washington, D.C. to look for things at the US Patent Office the company could use.

An intern, a young woman in her teens, came across a patent that described a paper “Gore-Tex”-like material that water could pass through in only one direction. Being just paper, she didn’t think it had sufficient substance to use for anything, e.g., you couldn’t make space suits out of it, or waterproof shoes or jackets. She set the patent aside.

Later that day, the same intern came across a 2nd patent. This patent’s illustration looked like a cloud and it described a material that could absorb thousands of times its weight in water.

The intern, an experienced babysitter, reportedly picked up the two patents, put them together, and invented Pampers, and, as told to me, received a patent even though it incorporated two patents already filed.

Not certain if it really happened this way, but the story makes the point.

Certainly, innovation/invention/originality in any area of human endeavor can seem strange, when we don’t know all the bits that someone accessed to arrive at the innovation.

Patent applications require a review of “prior art”.

Magic tricks can’t do anything beyond physics, they just hide or divert us from seeing all the pieces.

I still find Shakespeare strange and wonderful.

I feel the same way about A Thousand Brains.

2 Likes

Great. I had only come to A Thousand Brains via the book and have only begun to dig into the more recent work.

I think I phrased this poorly. I agree with your stance here, I was more probing for how you’d propose doing it (or how you think the brain does it).

It’s been a minute since I really thought about it myself, but personally, I suspect the brain employs a kind of resting state manifold learning function, potentially observed in things such as DMN activity.