Thanks a lot Niels for your detailed answer!
One important prediction of the Thousand Brains Theory is that the huge number of neurons and the complex columnar structure found throughout brain regions, including V1, do a lot more than simply identifying low-level features / high-frequency statistics like edges or colors.
To clarify my thinking: I am not suggesting that a V1 column simply identifies low-level features / high-frequency statistics like edges or colors. Rather than just identifying, I would say that it predicts the low-level features that will be sensed during the next iteration. This prediction does not seem that straightforward (and may explain the huge number of neurons in a single column) because it involves modeling at least three factors:
- Temporal sequences: a column can expect to see a specific edge at time T if it was sensing a given edge at time T-1 (this could be implemented with your Temporal Memory algorithm). V1 columns would be biased to expect the same edge when no lateral information is provided, but other columns may rely heavily on this mechanism to learn sequences. (~ receptive field inputs)
- Moving features: a column can expect to see a specific moving edge at time T if this moving edge was sensed by a nearby V1 column at time T-1 and no sensor movement is expected. It has to learn its relationship with its neighbouring columns. (~ local lateral connections)
- Moving sensors: a column can expect to see a specific edge at time T if that edge was sensed at time T-1 by a distant V1 column whose location corresponds to the efference copy of the forthcoming ocular saccade. This relationship also has to be learned. (~ long-distance lateral connections)
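To make the three factors concrete, here is a minimal toy sketch (not Monty or HTM code; the class, the count-based learning, and the voting scheme are all illustrative assumptions) of a column that predicts its next input by pooling evidence from its own past input, a neighboring column, and a distant column gated by an efference-copy signal:

```python
# Hypothetical toy model, NOT the actual Monty/HTM implementation.
# Each source keeps simple transition counts; prediction is a majority vote.
from collections import defaultdict

class ToyColumn:
    def __init__(self):
        # context -> {next_feature: count}, one table per factor
        self.temporal = defaultdict(lambda: defaultdict(int))   # own input at T-1
        self.lateral = defaultdict(lambda: defaultdict(int))    # neighbor at T-1
        self.efference = defaultdict(lambda: defaultdict(int))  # (distant input, saccade)

    def learn(self, prev_own, neighbor, distant, saccade, actual_next):
        self.temporal[prev_own][actual_next] += 1
        self.lateral[neighbor][actual_next] += 1
        self.efference[(distant, saccade)][actual_next] += 1

    def predict(self, prev_own, neighbor, distant, saccade):
        # Sum votes from the three sources; return the best-supported feature.
        votes = defaultdict(int)
        for table, key in ((self.temporal, prev_own),
                           (self.lateral, neighbor),
                           (self.efference, (distant, saccade))):
            for feature, count in table[key].items():
                votes[feature] += count
        return max(votes, key=votes.get) if votes else None

col = ToyColumn()
col.learn(prev_own="edge_45", neighbor="edge_90", distant="edge_0",
          saccade="left", actual_next="edge_90")
print(col.predict("edge_45", "edge_90", "edge_0", "left"))  # -> edge_90
```

In a real column the three sources would of course be learned representations rather than lookup tables, but the sketch shows why combining them is already more than "identifying" a feature.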
I agree that a “dumber” column that can only predict a few dozen objects/concepts sounds restrictive. On the other hand, it allows the use of SOM-like algorithms that facilitate learning and clustering with nice properties (continuity, fuzziness, stability; see my comment here 2021/11 - Continued Discussion of the Requirements of Monty Modules - #4 by mthiboust). It’s a trade-off I found interesting to make, but I’m open to revisiting my thinking based on your progress.
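For readers unfamiliar with SOMs, here is a minimal 1-D update step (my own illustrative sketch, not the implementation referenced above) showing where the continuity property comes from: the best-matching unit and its grid neighbors are pulled toward the input together, so nearby units end up representing similar features.

```python
# Minimal 1-D self-organizing map update step (illustrative sketch).
import numpy as np

def som_update(weights, x, lr=0.5, sigma=1.0):
    """Move the best-matching unit (BMU) and its grid neighbors toward input x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # closest unit to x
    grid_dist = np.abs(np.arange(len(weights)) - bmu)     # distance on the 1-D grid
    influence = np.exp(-grid_dist**2 / (2 * sigma**2))    # Gaussian neighborhood
    return weights + lr * influence[:, None] * (x - weights)

rng = np.random.default_rng(0)
weights = rng.random((5, 2))           # 5 units, 2-D feature space
weights = som_update(weights, np.array([1.0, 1.0]))
```

The Gaussian neighborhood is what gives the smooth, fuzzy clustering: a unit's update strength decays with its grid distance from the winner rather than being all-or-nothing.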