Ever since reading On Intelligence when it first appeared, I have suspected that the work of Thomas Cover could advance efforts to understand and model the brain. My recent reading of A Thousand Brains has helped me better form the case.
Beginning in 1980, I worked to develop one of the earliest fully automated trading systems, implementing information-theoretic algorithms that optimized portfolio risk versus return in the face of uncertainty. As my team became aware of Tom Cover's work in universal prediction, later versions of the platform incorporated it, building a fairly early machine learning approach to finance.
Thomas Cover’s work in universal prediction, universal portfolio theory, universal source coding, and universal data compression shaped significant parts of modern statistical learning and optimization. (He spent his entire professional career just down the road at Stanford.) His research on “universal” approaches led to algorithms that, without knowing the underlying probability distribution, converge over time to the performance of an approach that does know the distribution.
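To make the “universal” idea concrete, here is a minimal sketch of Cover’s universal portfolio for two assets, discretized over a grid of constant-rebalanced portfolios. The function name, grid size, and input format are my own choices for illustration, not anything taken from Cover’s papers or from any production system.

```python
import numpy as np

def universal_portfolio(price_relatives, grid_size=101):
    """Minimal sketch of Cover's universal portfolio for two assets.

    price_relatives: array of shape (T, 2); row t holds each asset's
    price ratio p_t / p_{t-1} for period t.
    Returns the sequence of weights placed on asset 0 and the final wealth.
    """
    grid = np.linspace(0.0, 1.0, grid_size)   # candidate constant-rebalanced portfolios
    crp_wealth = np.ones(grid_size)           # wealth each candidate would have earned so far
    wealth = 1.0
    weights = []
    for x in price_relatives:
        # Performance-weighted average of the candidates: the "universal" weight.
        b = np.sum(grid * crp_wealth) / np.sum(crp_wealth)
        weights.append(b)
        wealth *= b * x[0] + (1.0 - b) * x[1]             # realize this period's return
        crp_wealth *= grid * x[0] + (1.0 - grid) * x[1]   # update every candidate's wealth
    return np.array(weights), wealth

# Toy example: asset 0 alternately doubles and halves, asset 1 is cash.
x = np.array([[2.0, 1.0], [0.5, 1.0]] * 10)
w, final_wealth = universal_portfolio(x)
print(w[-1], final_wealth)
```

The appeal is the guarantee: with no model of the market, the achieved wealth tracks that of the best constant-rebalanced portfolio chosen in hindsight, up to a factor that grows only polynomially in the number of periods.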
A few links to more about Tom:
- Thomas Cover Information Theorist …, Stanford Engineering, https://engineering.stanford.edu/news/thomas-cover-information-theorist-and-electrical-engineer-dies-73.
- The Natural Mathematics Arising in Information Theory and Investment, https://www.youtube.com/watch?v=wsd8nsJg6D8.
- Elements of Information Theory, Thomas M. Cover, Joy A. Thomas, April 2005, https://cs-114.org/wp-content/uploads/2015/01/Elements_of_Information_Theory_Elements.pdf.
Cover’s work stands out for its contributions to machine learning that do not rely on prior knowledge of the underlying stochastic processes. This “model-free” innovation in sequential decision-making allows adaptation to unknown and even arbitrary behavior.
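As a small illustration of that model-free flavor, here is a sketch of sequential prediction with the Laplace (add-one) rule, one of the simplest universal predictors. The function name and example data are mine, chosen only to show the idea.

```python
def laplace_predictor(bits):
    """Sequential probability assignment for a binary stream with no source model.

    Before seeing bit t it outputs P(next bit = 1) using only the counts
    observed so far (add-one smoothing).
    """
    ones = 0
    preds = []
    for t, b in enumerate(bits):
        p_one = (ones + 1) / (t + 2)   # add-one estimate from the prefix
        preds.append(p_one)
        ones += b
    return preds

# The predictor adapts to a mostly-ones stream without being told its bias.
print(laplace_predictor([1, 1, 0, 1, 1, 1, 0, 1]))
```

It assumes nothing about how the sequence is generated, yet for any binary sequence its cumulative log-loss stays within O(log n) of the best constant-bias predictor chosen in hindsight.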
Once I know my way around the forum, I can post more on this. Suggestions on which thread would be best are appreciated.
I believe that Cover’s work can extend the A Thousand Brains description of the composition and operation of cortical columns, clusters of cortical columns, connections between clusters, and their respective reference frames.
If implemented, this could provide a framework upon which machines form flexible world models, transfer knowledge across tasks, and adapt to changing conditions. The approach (see the sketch after this list):
- Optimizes sequential decision-making under uncertainty, irrespective of the underlying distribution.
- Aligns with Darwinian evolution.
- Avoids “ruin” fragility.
- Formalizes a temporal dimension within and across all reference frames, supporting a coherent thread of experience.
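To hint at how this might touch the Thousand Brains picture, below is a toy sketch, entirely my own construction and not anything the book or the project describes: each cortical column is treated as an “expert” issuing a probability distribution over candidate objects, and a Cover-style performance-weighted mixture plays the role of voting across columns. All names and array shapes here are hypothetical.

```python
import numpy as np

def weighted_column_vote(column_probs, weights, observed):
    """One step of an exponentially weighted mixture over column predictions.

    column_probs: (n_columns, n_hypotheses) array; row i is column i's
                  probability distribution over candidate objects/states.
    weights:      (n_columns,) current credibility of each column.
    observed:     index of the hypothesis that turned out to be correct.

    Returns the pooled distribution (the "vote") and updated weights, where
    each column's weight is multiplied by the probability it assigned to
    what actually happened -- the update behind universal mixture predictors.
    """
    weights = weights / weights.sum()
    pooled = weights @ column_probs                 # mixture prediction before feedback
    new_weights = weights * column_probs[:, observed]
    return pooled, new_weights / new_weights.sum()

# Toy example with three hypothetical columns and four candidate objects.
cols = np.array([[0.70, 0.10, 0.10, 0.10],
                 [0.25, 0.25, 0.25, 0.25],
                 [0.10, 0.60, 0.20, 0.10]])
w = np.ones(3)
vote, w = weighted_column_vote(cols, w, observed=0)
print(vote, w)
```

Columns that keep predicting well accumulate weight, and columns that do not are gradually discounted, without ever assuming a model of how the input sequence is generated.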
I’ve come late to reading A Thousand Brains. The project may have already incorporated these ideas or ideas like them. If not, you may find them useful.
I know of nothing more important for the world than the work of the Thousand Brains Project.
More to follow.
- Andreas