Paper: Task learning increases information redundancy of neural responses in macaque visual cortex

I stumbled upon an interesting paper that appears to provide some empirical support for both the distributed and heterarchical nature of the Thousand Brains Theory. It revolves around another framework, "generative inference," and doesn't mention TBT, but the results seem applicable to both.

PDF: https://drive.google.com/file/d/1irgi_2c335yurKHX0OscEQYEzOzi_TkQ/view?usp=sharing

Long summary from https://www.science.org/doi/10.1126/science.adw7707

Introduction

How does the brain transform sensory input into perception and behavior? The classic model guiding most of neuroscience and modern deep learning views perception as a largely feedforward process: Sensory signals are transformed from early to higher visual areas to make behaviorally relevant information more explicit. Feedback connections are thought to merely fine-tune this process—enhancing relevant features or suppressing noise during attention and learning.

An alternative framework, generative inference, posits that sensory processing is fundamentally bidirectional. In this view, neurons represent beliefs about causes in the external world, continuously updated by the exchange of information between sensory evidence (feedforward) and prior expectations (feedback).

Rationale

A recent theoretical prediction from the generative inference framework offers a decisive way to empirically distinguish these two models. Generative inference models predict an increased sharing of task-related information among sensory neurons while learning a perceptual decision-making task—manifesting as higher redundancy in their responses. This prediction directly opposes the classic model, which holds that learning and attention reduce redundancy and correlated variability to improve coding efficiency.

To test these conflicting predictions, we measured changes in information redundancy among neurons in visual area V4 of two macaque monkeys as they learned to discriminate between two orientations in two separate tasks (cardinal and oblique). Neural activity was recorded chronically using Utah arrays over weeks of training. We quantified information redundancy as the difference between the linear Fisher information carried by the intact population activity and that carried by the same population after removing correlations.
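The redundancy measure described in that paragraph can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code: the function names are mine, linear Fisher information is computed with the standard plug-in estimator (the paper almost certainly uses a bias-corrected estimator for finite trial counts), and "removing correlations" is modeled here as keeping only the diagonal of the noise covariance, equivalent to shuffling trials independently per neuron.

```python
import numpy as np

def linear_fisher_information(resp_a, resp_b, delta_s=1.0):
    """Plug-in linear Fisher information for discriminating two stimuli.

    resp_a, resp_b: (trials, neurons) response matrices for the two
    orientations; delta_s is the stimulus difference (e.g. in degrees).
    """
    # Tuning-curve derivative, approximated by the finite difference of means.
    f_prime = (resp_b.mean(axis=0) - resp_a.mean(axis=0)) / delta_s
    # Noise covariance, averaged across the two stimulus conditions.
    sigma = 0.5 * (np.cov(resp_a, rowvar=False) + np.cov(resp_b, rowvar=False))
    # I = f'^T Sigma^{-1} f'
    return float(f_prime @ np.linalg.solve(sigma, f_prime))

def redundancy(resp_a, resp_b, delta_s=1.0):
    """Information after removing correlations minus intact information.

    Positive values mean the correlated population carries less information
    than the same neurons would if they were independent, i.e. neurons
    share (are redundant about) task-related information.
    """
    i_intact = linear_fisher_information(resp_a, resp_b, delta_s)
    f_prime = (resp_b.mean(axis=0) - resp_a.mean(axis=0)) / delta_s
    # Diagonal-only covariance: each neuron contributes f'_i^2 / var_i.
    var = 0.5 * (resp_a.var(axis=0, ddof=1) + resp_b.var(axis=0, ddof=1))
    i_shuffled = float(np.sum(f_prime**2 / var))
    return i_shuffled - i_intact
```

With this sign convention, a population of independent neurons gives redundancy near zero (as reported at the start of learning), while positively correlated neurons with similar tuning give positive redundancy (as reported after training).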

Results

At the start of learning, redundancy was near zero, indicating largely independent neural responses. Over the course of training, redundancy increased, ultimately reaching levels where roughly half of each neuron's information was shared with other recorded neurons. Redundancy also increased dynamically within trials, over hundreds of milliseconds, consistent with the gradual accumulation and sharing of information. Increased redundancy did not come at the cost of population information; instead, the information carried by individual neurons increased, both as predicted by generative inference. Learning-related changes in redundancy were stronger during task performance than during passive viewing on the same day, suggesting that the redistribution of information underlying the increase in redundancy depends on active task engagement.

Conclusion

Learning a perceptual task increased information redundancy among sensory neurons—a result that contradicts conventional understanding of the roles of learning and attention. Rather than eliminating correlated variability, learning appears to redistribute information across neurons through feedback and recurrent interactions, enabling consistent beliefs about the sensory world. These findings suggest that cortical sensory processing is best understood as a dynamic inference process—one that integrates prior expectations and sensory evidence—challenging the long-held assumption of a fundamentally unidirectional information flow during sensory processing in the brain.
