Since the human brain’s prefrontal cortex is part of the neocortex, will the Thousand Brains Project seek to create mechanisms akin to the prefrontal cortex? If not, are the mechanisms the project produces still biased, as in earlier computer models, towards secularism and essentially amoral, given this cortex’s role in ethics and spirituality?
Thank you for your question @markellingsen! Our understanding is that the neocortex has a functional, repeating unit called the cortical column (each column contains about a hundred thousand neurons, and you have about 150,000 columns throughout your neocortex). The cortical columns are strikingly similar everywhere you look in the neocortex. The Thousand Brains Project aims to fully understand how these columns function and how they work together to form human intelligence. Currently, we’re primarily focused on the sensory areas of the neocortex, but because the columns in the prefrontal cortex are going to be very similar to the columns everywhere else, as we progress through our roadmap we should indeed develop a good idea of what is happening in the prefrontal cortex over time.
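As a rough sanity check, those two figures multiply out to something close to the commonly cited estimate of roughly 16 billion neurons in the human neocortex (a back-of-the-envelope calculation, purely illustrative):

```python
# Back-of-the-envelope check of the numbers above.
neurons_per_column = 100_000
num_columns = 150_000

total_neurons = neurons_per_column * num_columns
print(f"{total_neurons:,}")  # 15,000,000,000 -- in the ballpark of the
                             # ~16 billion neurons usually estimated for
                             # the human neocortex
```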
This doesn’t exactly address your question about the cortex’s role in ethics and spirituality, but the above is our starting point.
Thanks so much for this helpful response! But I do need to explore with you whether this has long-term implications for the findings of the project and/or whether it rules out certain questions. First, I wonder whether the profound similarities of the sensory areas (columns) of the neocortex entail a supposition on the project’s part that there are no unique qualities and operations of the prefrontal cortex, as neurobiological research has hypothesized. If that remains an open and valid question for the project, and you do find that the prefrontal cortex plays an administrative, ethical, and even spiritual role in the brain, would an AI not embodying these operations be vulnerable to some of the same problems the existing AI models have (though, to be sure, your projections come a lot closer to the workings of the human brain)?
A follow-up on my questions concerning whether your Thousand Brains Project model for AI is developing mechanisms which parallel the brain’s prefrontal cortex: have you been in dialogue with the UMass study reported in the Jan. 1, 2021 edition of Psychology Today, or with the study reported on in The Harvard Gazette in July 2023, undertaken by the Samuel Gershman Computational Cognitive Neuroscience Lab? To this amateur it appears that these studies are relevant to your project, and if not, could you explain why not, as it would help me learn more about precisely what you aim to accomplish. (Full bibliographical details on these leads available on request.)
Hi @markellingsen! Sorry about the late reply; the team was away for a couple of weeks for a hackathon/retreat.
I tried to look up the specific study from Psychology Today / The Harvard Gazette (a direct link to the research article would be helpful!) but couldn’t find it. I did look at the Samuel Gershman Computational Cognitive Neuroscience Lab website, and indeed their work seems closely related to our project. To me, there are so many unknowns about the human brain that no single research project will solve all problems; all projects are steps towards this goal.
That said, @vclay or @nleadholm may actually have a better idea about our research collaborators. And the bibliographical details would be appreciated!
Dr. Lee,
Thanks for your interest and the suggestion I contact Dr. Clay and Dr. Leadholm. Please provide their full email addresses if the addresses used in this email will not work. Here are the references for the research:
Regarding the work of Gershman, the article I found about that is:
Christy DeSmith, “Making algorithm used in AI more human-like”, in the July 12, 2023 issue of The Harvard Gazette at
https://news.harvard.edu/gazette/story/2023/07/making-algorithm-used-in-ai-more-human-like/
The primary researcher seems to be Momchil Tomov. On April 19, 2023, he and colleagues published an open-archive article in Neuron titled “The neural architecture of theory-based reinforcement learning”. Do you need the full address for that one?
Regarding the Psychology Today article, it is
Cami Rosso, “How a New AI Model Mimics the Brain’s Prefrontal Cortex”, at
https://www.psychologytoday.com/intl/blog/the-future-brain/202101/how-new-ai-model-mimics-the-brain-s-prefrontal-cortex
The article is about research at the Salk Institute and the University of Massachusetts Amherst undertaken by Terrence Sejnowski, Ben Tsuda, and Kay Tye.
For their article, see the Nov. 5, 2020 Proceedings of the National Academy of Sciences. Its title is
“A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex”.
I am especially interested in this latter research because, as long as AI fails to mimic the prefrontal cortex (the part of the human brain especially activated in ethical and spiritual experience), AI would seem devoid of these capacities. I have already begun to identify AI results which seem to discount data drawn from spiritual and religious institutions, suggesting a secular bias in its present modes. Are any of those issues on the radar screen for your research?
With thanks for considering these matters and possible further dialogue,
Mark
Hi @markellingsen
As @brainwaves mentioned, one of the key premises of the Thousand Brains Theory is that the neocortex implements the same basic algorithm everywhere, including in the prefrontal cortex. So we are not explicitly looking into modeling the prefrontal cortex, but rather into modeling the general computational mechanisms implemented in every cortical column. We are not yet at the stage where we model and analyze a large number of learning modules (our implementation of the basic cortical column algorithm) stacked hierarchically, so that they would model the more high-level, abstract concepts you may find in the prefrontal cortex. But once we do, we would expect the results to be consistent with findings from Gershman’s lab. As far as I can tell, their findings about causal modeling and updates to models based on prediction errors are consistent with our theory. We would expect this basic mechanism to happen everywhere in the brain, depending on what type of data is being modeled and predicted.
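To make that basic mechanism concrete, here is a minimal sketch of prediction-error-driven model updating, the operation described above. All names here are hypothetical; this is not Monty’s actual API, just the general idea:

```python
# Minimal sketch of prediction-error-driven model updating.
# Hypothetical names; not Monty's actual implementation.

class LearningModule:
    """A toy stand-in for one cortical column / learning module."""

    def __init__(self, learning_rate=0.1):
        self.model = {}                  # state -> predicted observation
        self.learning_rate = learning_rate

    def predict(self, state):
        # Predict the observation expected in this state; 0.0 if unseen.
        return self.model.get(state, 0.0)

    def update(self, state, observation):
        # Move the stored prediction toward what was actually observed.
        prediction = self.predict(state)
        error = observation - prediction          # the prediction error
        self.model[state] = prediction + self.learning_rate * error
        return error


# The same module class can be instantiated many times; each instance
# models whatever data stream it receives, sensory or more abstract.
module = LearningModule()
for _ in range(50):
    module.update(state="edge_of_cup", observation=1.0)
print(round(module.predict("edge_of_cup"), 2))    # approaches 1.0
```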
- Viviane
Thank you Viviane for this invaluable response. I will attend especially to your observation that “we are not explicitly looking into modeling the prefrontal cortex.” Thus, as I report on your project to the community of religion scholars, do I fairly represent the Thousand Brains Project as not attending to the question of whether AI can or should replicate the computational mechanisms behind the prefrontal cortex’s role in spirituality and ethics? This issue then still seems not to be on the radar screen of cutting-edge AI researchers. (I will still need to touch base with Gershman’s lab to clarify.) And in that case, humanities scholars like me are right to worry about a possible secular bias (if not an amoral tint) in present AI models, even in exciting cutting-edge projects like yours. Please shoot me down on that conclusion, as I do not want to be right about it!
With profound thanks for our dialogue,
Mark
Hi @markellingsen
I’m not sure I understand what you are getting at. You seem to be looking at AI through a very specific lens, and it is hard for me to understand what kind of scientific answer you are looking for. Given our approach, I don’t see how to answer your question as posed.
To clarify what we do: Monty aims to be a general-purpose modeling system, similar to the neocortex. It learns models of whatever it is sensing and can then use these models to recognize previously seen objects or to plan. To make it more concrete, Monty currently learns models of everyday objects like cups and bowls by moving little sensor patches over those objects. It can then use the models it has learned to recognize those objects and their poses later on. It can also use the models to plan how to move its sensors. I don’t see how this relates to your question about religion.
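To give a flavor of that (purely illustrative; the object models and feature names below are made up and this is not Monty’s actual implementation), you can think of a learned object as a set of (location, feature) pairs, and recognition as finding the stored model that best explains what the moving sensor observed:

```python
# Hedged sketch of sensorimotor object recognition: learn an object as
# (location, feature) pairs gathered by a moving sensor patch, then
# recognize by checking which stored model best explains new observations.
# Illustrative only; not Monty's actual API or data structures.

learned_models = {
    "cup":  {(0, 0): "curved_surface", (0, 1): "rim", (1, 0): "handle"},
    "bowl": {(0, 0): "curved_surface", (0, 1): "rim", (1, 0): "curved_surface"},
}

def recognize(observations):
    """Score each model by how many observed (location, feature) pairs it explains."""
    scores = {}
    for name, model in learned_models.items():
        scores[name] = sum(
            1 for loc, feat in observations if model.get(loc) == feat
        )
    return max(scores, key=scores.get)

# A short sensing trajectory: the sensor patch visited two locations.
print(recognize([((1, 0), "handle"), ((0, 0), "curved_surface")]))  # -> "cup"
```

Monty’s real models are of course far richer (3D locations, object poses, continuous features), but the core idea the paragraph above describes is the same: features tied to locations, matched as the sensor moves.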
Maybe the closest answer I can give is that once Monty is at a stage where it can learn about more abstract concepts (which it is not right now), it should be able to learn models of religions and ethics as well. However, how you utilize those models is up to the user.
I’m sorry if my answer is full of questions, but your question sounds to me like asking whether a map has ethics. If you want to report that we are not paying attention to this question, that is fine. I guess I just don’t understand how this question relates to what we are working on. I don’t think you have to worry about an amoral tint in our approach, as it is just learning models of the world. What you do with those models is up to the human user. (You don’t worry about computers or calculators being immoral, right?)
Best wishes,
Viviane