Thoughts on emergent behavior in groups of cortical columns

Here, FWIW, is a short ChatGPT session on emergent behavior in groups of cortical columns:

I said:

I seem to recall that attempts to model collective behavior in animals (e.g., flocks of birds, schools of fish) were eased by the realization that the modeling needed to focus on a single animal’s strategies (e.g., place my wingtip(s) in a certain relationship to the neighboring birds). So, the behavior emerges from the context. It strikes me that the behavior of cortical columns may need to be thought of in a similar manner: until a substantial number of columns (and their neurons) can be simulated, the lack of context may stand in the way of modeling interesting behavior. Discuss.

ChatGPT said:

You’re remembering a key insight from the study of self-organizing systems: if you try to model the entire flock/school/herd as a global entity, the model quickly becomes unmanageable. But if you switch to local rules—what a single bird or fish does based only on its nearby neighbors—complex collective behavior emerges naturally.

This analogy maps very well onto cortical columns and the challenge of modeling the neocortex. …