Question(s) on "How Neurons Interpolate Between Points" video

Here's the clip: https://www.youtube.com/watch?v=du7TkVtTb6o

One question here is: what's the point? Here's what I mean by that:

  • we start from an SDR representing one point on a circle
  • some neurons (or, more precisely, dendrite segments) will “recognize” that SDR and fire.
  • all these neurons will fire at the same time, forming a new SDR, but…
  • they will represent the same point on the circle in a different SDR, or, if you like, another “view” of the same point on the circle.

But what’s the point? The new SDR brings no extra information beyond the old one; it carries the same information about the point’s position on the circle.

PS: to be more precise, in what way is the new SDR more useful than the old one?

OK, now that this question got my neurons firing, here's a potential mechanism that would make this useful.

Let’s assume 10 synapses of a certain segment learn to become a “6” detector; call it the six-o’clock-position detector.
The segment will activate when at least (e.g.) 6 or 7 of its 10 synapses fire simultaneously.
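
Here is a minimal sketch of a segment acting as such an overlap-threshold detector (my own toy model, not the actual HTM implementation; the bit indices and threshold are made up for illustration):

```python
# Minimal sketch of a dendrite segment as a simple overlap-threshold detector
# (toy model; bit indices are made up).

def segment_fires(synapses, active_bits, threshold):
    """Fire when enough of the segment's synapses see active input bits."""
    overlap = len(synapses & active_bits)
    return overlap >= threshold

# Hypothetical "six o'clock" detector: 10 synapses, activation threshold 7.
six_oclock_synapses = {3, 11, 42, 57, 63, 71, 88, 90, 95, 99}

# An input SDR in which 8 of the 10 learned bits happen to be active.
active_input = {3, 11, 42, 57, 63, 71, 88, 90, 120, 200}

print(segment_fires(six_oclock_synapses, active_input, threshold=7))  # True
```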

Let’s assume the same segment, through a different group of synapses, learns to detect (and signal) V-shaped objects or patterns whenever they occur somewhere in the sensory stream.

Now this segment will be able to detect two seemingly unrelated events: a “V”-shaped thing anywhere in the visual field, OR something moving at the 6-o’clock position in the sensory field.

Normally this is confusing behaviour, since the segment cannot distinguish between the two unrelated events.

But let’s assume there is a mechanism by which the segment could be partially inhibited. What that means: whenever the “6-o’clock” pattern occurs, only 3-4 of its synapses effectively fire, and whenever the “V” pattern occurs, only 3-4 synapses effectively fire. Let’s leave for later the question of how it is possible to partially inhibit a neuron; just assume it is possible.

That leads to the paradoxical result that, after it has put in the effort to learn two different patterns, the segment will NOT respond to either one.

BUT it will be able to notice the coincidence of a “V” at the 6-o’clock position even if it has never seen this compound pattern during learning: 3-4 synapses from each of the two unrelated patterns will push it past the activation threshold.
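
Here is a toy sketch of that idea, under the assumptions above (partial inhibition is modeled simply as scaling the overlap down; the patterns, threshold and inhibition factor are made up):

```python
# Toy sketch of the partial-inhibition idea (my reading of it, not an
# established model). One segment stores two unrelated 10-synapse patterns;
# each pattern alone contributes too few effective synapses to fire the
# segment, while both together cross the threshold.

SIX_OCLOCK = set(range(0, 10))      # hypothetical bits of the "6 o'clock" pattern
V_SHAPE    = set(range(100, 110))   # hypothetical bits of the "V shape" pattern
SEGMENT    = SIX_OCLOCK | V_SHAPE   # both patterns learned on the same segment

THRESHOLD  = 7    # effective active synapses needed to fire
INHIBITION = 0.4  # partial inhibition: only ~40% of the overlap counts

def fires(active_bits):
    """Does the partially inhibited segment fire for this input SDR?"""
    effective = int(len(SEGMENT & active_bits) * INHIBITION)
    return effective >= THRESHOLD

print(fires(SIX_OCLOCK))            # False: 10 synapses -> 4 effective, below 7
print(fires(V_SHAPE))               # False: 10 synapses -> 4 effective, below 7
print(fires(SIX_OCLOCK | V_SHAPE))  # True:  20 synapses -> 8 effective, past 7
```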

Which, when I think of it, I find remarkable. If a segment learns 20 unrelated patterns individually (using ~200-300 synapses) and its sensitivity is then dampened half-way, it will be able to recognize 20*19/2 = 190 pair-wise coincidences!
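
A quick sanity check of the count (each coincidence is an unordered pair of two of the 20 learned patterns):

```python
# Counting the pair-wise coincidences a single segment could signal.

from itertools import combinations

n_patterns = 20
pairs = list(combinations(range(n_patterns), 2))

print(len(pairs))                          # 190
print(n_patterns * (n_patterns - 1) // 2)  # 190, same count
```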

That could be the mechanism by which a new, unexpected arrangement of known patterns draws our attention so powerfully, and why it is so immediately self-evident and persistent.

Maybe a neuron’s purpose isn’t to learn patterns, but to wait for, and signal, peculiar coincidences? The learning part is only the means through which this goal is reached.

Great question @blimpyway, this gets at a key aspect of representation learning: invariance vs. equivariance. An invariant representation is one that does not change as a function of something in the world changing. For example, the fact that you recognize a chair as a chair is invariant to its position in a room, and to its rotation. In contrast, an equivariant representation does get modified as a function of the information from the outside world changing. For example, as you rotate a chair, your representation of its orientation in space is updated accordingly.

In intelligent systems, we want some representations to be invariant, and others to be equivariant, as both are useful in different settings. If your representation of rotation were invariant to rotations of an object, it would be useless, because it wouldn’t reflect changes in rotation - it would be fixed! If, on the other hand, your representation of object ID were not invariant to changes in illumination, you would think every object was different depending on whether it was under a strong or dim light, and you would have to constantly relearn objects.
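
A tiny concrete illustration (my own toy example, not from the video): rotating a 2D point changes its coordinates, which is an equivariant representation, while its distance from the origin stays the same, which is an invariant one.

```python
# Invariance vs. equivariance for a 2D point under rotation (toy example).

import math

def rotate(point, angle):
    """Rotate a 2D point counter-clockwise by `angle` radians."""
    x, y = point
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

p = (1.0, 0.0)
q = rotate(p, math.pi / 2)

print(p, q)                            # coordinates change: equivariant
print(math.hypot(*p), math.hypot(*q))  # both 1.0: invariant to the rotation
```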

Many representations are useful if they are invariant under small perturbations and equivariant under larger ones. Location is an example of this: if a nose has shifted a few mm on a face, you still recognize the face, but if the nose has moved to the back of someone’s head, then something else is going on… What Jeff is describing is simply a mechanism for achieving invariance in location representations under small changes. Hope that makes sense.

Re. your follow-up example, I’m not sure I followed it completely, but in general, prediction error and surprise are definitely a big part of learning and of what neurons are trying to do - our brain is constantly learning where things are so that we can predict what we will see, and failing to do so helps drive us to learn new things.
