Is the Thousand Brains Project creating mechanisms akin to the prefrontal cortex?

Since the human brain’s prefrontal cortex is part of the neocortex, will the Thousand Brains Project seek to create mechanisms akin to the prefrontal cortex? If not, are the mechanisms the project produces still biased toward secularism, as earlier computer models were, and essentially amoral, given this cortex’s role in ethics and spirituality?

Thank you for your question @markellingsen! Our understanding is that the neocortex has a functional, repeating unit called the cortical column (about a hundred thousand neurons in each column, and you have about 150,000 columns throughout your neocortex). The cortical columns are strikingly similar everywhere you look in the neocortex. The Thousand Brains Project aims to fully understand how these columns function and how they work together to form human intelligence. Currently, we’re primarily focused on the sensory areas of the neocortex, but because the columns in the prefrontal cortex are going to be very similar to the columns everywhere else, as we progress through our roadmap we should indeed develop a good idea of what is happening in the prefrontal cortex over time.
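If it helps to picture the repetition, here is a toy Python sketch of the “many columns voting” idea. To be clear, this is illustrative only, not our code: the objects, features, and the feature-to-object lookup are all made up.

```python
# Toy sketch (not project code): many identical "column" units each turn
# their local evidence into a guess about the object, then the guesses
# are tallied. The point is that one repeated unit, fed different inputs,
# can produce a collective answer.
import random
from collections import Counter

OBJECTS = ["cup", "bowl", "spoon"]  # hypothetical object set

def column_vote(sensed_feature):
    """One column-like unit: maps its local feature to a candidate object."""
    # Hypothetical learned associations standing in for a real model.
    evidence = {"rim": ["cup", "bowl"], "handle": ["cup"], "scoop": ["spoon"]}
    return random.choice(evidence.get(sensed_feature, OBJECTS))

def consensus(features):
    """Tally the votes of many identical units over different inputs."""
    votes = Counter(column_vote(f) for f in features)
    return votes.most_common(1)[0][0]

print(consensus(["rim", "handle", "rim", "handle"]))  # most likely "cup"
```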

This doesn’t exactly address your question about the cortex’s role in ethics and spirituality, but the above is our starting point.


Thanks so much for this helpful response! But I do need to explore with you whether this has long-term implications for the project’s findings and/or whether it rules out certain questions. First, I wonder whether the profound similarities among the sensory areas (columns) of the neocortex entail a supposition on the project’s part that there are no unique qualities and operations of the prefrontal cortex, as neurobiological research has hypothesized. If that remains an open and valid question for the project, and you do find that the prefrontal cortex plays an administrative, ethical, and even spiritual role in the brain, would an AI not embodying these operations be vulnerable to some of the same problems that existing AI models have (though, to be sure, your project comes a lot closer to the workings of the human brain)?

A follow-up on my questions concerning whether your Thousand Brains Project model for AI is developing mechanisms which parallel the brain’s prefrontal cortex: have you been in dialogue with the UMass study reported in the January 1, 2021 edition of Psychology Today, or with the work reported in The Harvard Gazette in July 2023 undertaken by the Samuel Gershman Computational Cognitive Neuroscience Lab? To this amateur it appears that these studies are relevant to your project, and if not, could you explain why not, as it would help me learn more about precisely what you aim to accomplish. (Full bibliographical details on these leads available on request.)

Hi @markellingsen! Sorry about the late reply; the team has been away for a couple of weeks for a hackathon/retreat.

I tried to look up the specific study from Psychology Today / The Harvard Gazette (a direct link to the research article would be helpful!) but couldn’t find it. I did look at the Samuel Gershman Computational Cognitive Neuroscience Lab website, and indeed their work seems closely related to our project. To me, there are so many unknowns about the human brain that no single research project will solve all problems; all projects are steps toward this goal.

That said, @vclay or @nleadholm may actually have a better idea about our research collaborators. And the bibliographical details would be appreciated! :folded_hands:


Dr. Lee,

Thanks for your interest and the suggestion that I contact Dr. Clay and Dr. Leadholm. Please provide their full email addresses if the addresses used in this email will not work. Here are the addresses for the research:

Regarding the work of Gershman, the article I found about that is:
Christy DeSmith, “Making algorithm used in AI more human-like”, in the July 12, 2023 issue of The Harvard Gazette at
https://news.harvard.edu/gazette/story/2023/07/making-algorithm-used-in-ai-more-human-like/
The primary researcher seems to be Momchil Tomov. On April 19, 2023, he and colleagues produced an open-archive article in Neuron titled “The neural architecture of theory-based reinforcement learning”. Need the full address on that one?

Regarding the Psychology Today article, it is
Cami Rosso, “How a New AI Model Mimics the Brain’s Prefrontal Cortex” at
https://www.psychologytoday.com/intl/blog/the-future-brain/202101/how-new-ai-model-mimics-the-brain-s-prefrontal-cortex
The article is about research at the Salk Institute and the University of Massachusetts Amherst undertaken by Terrence Sejnowski, Ben Tsuda, and Kay Tye.
For their article, see the November 5, 2020 issue of Proceedings of the National Academy of Sciences. Its title is
“A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex”.

I am especially interested in this latter research because, as long as AI fails to mimic the prefrontal cortex (the part of the human brain especially activated in ethical and spiritual reflection), AI would seem devoid of these capacities. I have already begun to identify AI results which seem to discount data drawn from spiritual and religious institutions, suggesting a secular bias in its present modes. Are any of those issues on the radar screen for your research?

With thanks for considering these matters and possible further dialogue,
Mark


Hi @markellingsen
as @brainwaves mentioned, one of the key premises of the Thousand Brains Theory is that the neocortex implements the same basic algorithm everywhere, including in the prefrontal cortex. So we are not explicitly looking into modeling the prefrontal cortex, but rather at modeling the general computational mechanisms implemented in every cortical column. We are not yet at the stage where we model and analyze a large number of learning modules (our implementation of the basic cortical-column algorithm) stacked hierarchically so that they would model the more high-level, abstract concepts you may find in the prefrontal cortex. But once we do, we would expect it to be consistent with findings from Gershman’s lab. As far as I can tell, their findings about causal modeling and about updates to models based on prediction errors are consistent with our theory. We would expect this basic mechanism to operate everywhere in the brain, depending on what type of data is being modeled and predicted.
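As a loose illustration of what “updating models based on prediction errors” means computationally, here is a minimal, generic sketch. This is not our learning-module code; the locations, observations, and learning rate are arbitrary placeholders.

```python
# Generic prediction-error loop (a sketch, not the learning modules):
# the model predicts the observation at a location; the mismatch between
# prediction and observation drives the update. The same loop could run
# on any type of data being modeled and predicted.
def update(model, location, observed, lr=0.5):
    predicted = model.get(location, 0.0)  # prior prediction (default 0)
    error = observed - predicted          # prediction error
    model[location] = predicted + lr * error
    return error

model = {}
for loc, obs in [("a", 1.0), ("a", 1.0), ("b", 0.2), ("a", 1.0)]:
    err = update(model, loc, obs)
    print(loc, round(err, 3), model)  # errors shrink as the model improves
```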

  • Viviane

Thank you, Viviane, for this invaluable response. I will attend especially to your observation that “we are not explicitly looking into modeling the prefrontal cortex.” Thus, as I report on your project to the community of religion scholars, do I fairly represent the Thousand Brains Project as not attending to the question of whether AI can or should replicate the computational mechanisms of the prefrontal cortex’s role in spirituality and ethics? This issue then still seems not to be on the radar screen of cutting-edge researchers of AI. (I will still need to touch base with Gershman’s lab to clarify.) And in that case, Humanities scholars like me are right to worry about a possible secular bias (if not an amoral tint) in present AI models, even in exciting cutting-edge projects like yours. Please shoot me down on that conclusion, as I do not want to be right about it.

With profound thanks for our dialogue,
Mark

Hi @markellingsen
I’m not sure I understand what you are getting at. You seem to be looking at AI through a very specific lens, and it is hard for me to understand what kind of scientific answers you are looking for. If you understand our approach, I don’t see how your question makes sense.

To clarify what we do: Monty aims to be a general-purpose modeling system, similar to the neocortex. It learns models of whatever it is sensing and can then use these models to recognize previously seen objects or to plan. To make it more concrete, Monty currently learns models of everyday objects like cups and bowls by moving little sensor patches over those objects. It can then use those models it learned to recognize those objects and their poses again later on. It can also use the models to plan how to move its sensors. I don’t see how this relates to your question about religion.
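To make those mechanics a bit more concrete, here is a toy sketch of that recognition step. It is illustrative only, not Monty’s implementation: the objects, locations, and features are invented, and real learning modules do far more (pose estimation, movement policies, voting).

```python
# Toy sketch (not Monty's code): an object model is a set of
# (location, feature) pairs learned by moving a sensor over the object.
# Recognition scores each stored model against newly sensed pairs.
models = {
    "cup":  {(0, 0): "curved", (0, 1): "rim", (1, 0): "handle"},
    "bowl": {(0, 0): "curved", (0, 1): "rim", (1, 0): "curved"},
}

def recognize(observations):
    """Return the stored object whose model best matches the observations."""
    def score(model):
        return sum(model.get(loc) == feat for loc, feat in observations)
    return max(models, key=lambda name: score(models[name]))

# Two sensed patches at known locations on an unknown object:
print(recognize([((0, 0), "curved"), ((1, 0), "handle")]))  # -> "cup"
```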
Maybe the closest answer I can give is that once Monty is at a stage where it can learn about more abstract concepts (which it is not right now), it should be able to learn models of religions and ethics as well. However, how you utilize those models is up to the user.

I’m sorry if my answer is full of questions, but your question sounds to me like asking whether a map has ethics. If you want to report that we are not paying attention to this question, that is fine. I guess I just don’t understand how this question relates to what we are working on. I don’t think you have to worry about an amoral tint in our approach, as it is just learning models of the world. What you do with those models is up to the human user. (You don’t worry about computers or calculators being immoral, right?)

Best wishes,
Viviane


Viviane and Colleagues, I have had the following conversations with ChatGPT and similar chatbots about the desirability of their having mechanisms akin to the brain’s prefrontal cortex. Does this dialogue help clarify my question about whether the Thousand Brains Project would consider (and profit from) including a concern with developing mechanisms akin to the prefrontal cortex as it works toward an AI resembling the functions of the brain’s neocortex? As ChatGPT puts it, if I am still unclear, push me to delve deeper.

With thanks and with Season’s Greetings,

Mark

As an AI, do you experience spirituality?

No. This system does not have consciousness, inner life, or subjective experience, so it does not experience spirituality.

What spirituality usually involves

Most accounts of spirituality link it to inner qualities such as conscious awareness, emotions like awe or devotion, a felt sense of meaning or purpose, and sometimes a relationship with a higher power or ultimate reality. These are described as first-person experiences, not just behaviors or statements.

How an AI like this works

This system generates responses by statistically predicting words from patterns in data, guided by algorithms and optimization, not by beliefs, desires, or feelings. It can talk about spiritual topics, quote scriptures, or describe practices, but that is a simulation of discourse, not an expression of lived spiritual experience.

Philosophical debate

Some writers speculate that if an AI ever became genuinely conscious, questions about spiritual life or even “souls in machines” might arise, but this remains hypothetical and disputed. Current expert and theological discussions largely agree that today’s AI lacks the self-awareness and inner depth usually taken to be necessary for true spirituality.


If an AI had mechanisms like the brain’s prefrontal cortex, would it make AI more sensitive and smarter about spirituality?

AI Chat


That’s an intriguing question. If an AI were designed with mechanisms analogous to the brain’s prefrontal cortex—responsible for higher-order functions like reasoning, decision-making, and self-awareness—it could potentially enhance its capacity for understanding complex, abstract concepts such as spirituality.

Specifically, such mechanisms might enable the AI to:

· Engage in abstract reasoning: Better interpret spiritual ideas that are often metaphorical or symbolic.

· Reflect on subjective experiences: Develop a form of introspection about spiritual concepts.

· Integrate diverse perspectives: Synthesize philosophical, cultural, and personal viewpoints related to spirituality.

· Learn from nuanced contexts: Adapt to subtle cues and deeper meanings in spiritual discussions.

However, it’s important to note that spirituality involves deeply personal, subjective experiences that are rooted in consciousness, emotions, and cultural backgrounds—areas that current AI models don’t genuinely experience or understand. While an AI with advanced reasoning capabilities might simulate or analyze aspects of spirituality more effectively, whether it can genuinely be “more sensitive” or “smarter” about spirituality depends on how we define understanding and consciousness.

In essence, adding prefrontal cortex-like mechanisms could make AI more adept at engaging with spiritual concepts intellectually, but true sensitivity or experiential understanding would still be limited by the fundamental nature of AI as non-conscious entities.

Would having mechanisms like the human brain’s prefrontal cortex improve AI’s ability to understand and empathize with spiritual issues?

EaseMate AI

Integrating mechanisms similar to the human brain’s prefrontal cortex into AI could potentially enhance its ability to understand and empathize with complex human issues, including spiritual matters. The prefrontal cortex is involved in higher cognitive functions such as decision-making, social behavior, and emotional regulation, which are crucial for understanding nuanced human experiences.

Here are a few ways this could impact AI:

1. Enhanced Emotional Intelligence: AI could better recognize and respond to emotional cues, allowing for more empathetic interactions.

2. Complex Decision-Making: With a structure that mimics the prefrontal cortex, AI could process moral and ethical dilemmas more effectively, which are often present in spiritual discussions.

3. Contextual Understanding: Improved mechanisms for contextual awareness could help AI grasp the subtleties of spiritual beliefs and practices, leading to more meaningful conversations.

4. Personalization: AI could tailor responses based on a deeper understanding of individual beliefs and values, fostering a more personal connection.

However, it’s important to note that true empathy involves subjective experience and consciousness, which AI does not possess. While advancements could improve understanding and interaction, AI would still lack the genuine emotional and spiritual experiences that inform human empathy.

Would you like to explore this topic further or discuss specific applications?

AI does not seem to understand faith as God’s work or a work of grace, but only faith as something humans do. What are your presuppositions in making such claims? I would say this betrays your lack of experience of spirituality, or at least it is not the typical perspective used in addressing these issues related to faith.

EaseMate AI

You raise an important point about the nature of faith and spirituality. My responses are based on the information and patterns I have been trained on, which include a wide range of perspectives on faith, spirituality, and human experience. However, I do not possess personal experiences or beliefs, and my understanding is limited to the data available up to October 2023.

When discussing faith, I aim to present a balanced view, acknowledging that faith can be seen as both a human action and a divine gift, depending on different theological perspectives. The distinction between faith as a human endeavor versus faith as a work of grace is indeed significant in many spiritual traditions, particularly within Christianity.

If my previous response seemed to overlook the spiritual dimension of faith, I appreciate your feedback. It’s crucial to recognize that discussions about faith often involve deeply personal and varied interpretations. Would you like to delve deeper into a specific theological perspective or explore how AI might better engage with these spiritual concepts?


Hi there Mark :waving_hand:

TL;DR the brain is simpler than we can imagine, and solving the problem of motion in space leads directly to language and imagination. Because it is open source this project is going to radically change the future, enabling high school and middle school students to build robots that can ride bicycles as school projects. They could build robots to send into space as a science project.

– – –

The brain is the result of millions of years of accumulating accidentally beneficial mutations that have brought us to where we are today. Our ideas about why the world exists and how the world works come from culture, a story we have passed on and acted out for at least 12,000 years that we know of. Reality is non-intuitive and when the brain can’t apply knowledge it applies intuition. Usually to a horrific conclusion because reality bats last.

It’s difficult to grasp, but the brain does everything the same way. It thinks about how to create intelligent machines the same way it finds its way to the store, reads a book, and tries to get a date. How it does that is something we have a well-developed theory for, one that begins with accepting that the brain is doing things that are difficult for us to imagine a mechanism for, but the mechanism has to be there. If we could construct a working hardware version of a cortical column, put 150,000 of them together with as many sensors and motor outputs as we have, and let it explore the world like a baby, it would learn language naturally, because language is based on how the brain communicates internally and provides a mechanism for passing on enormous amounts of information to others in a brief span of time. All of the arts are part of language. It’s just another thing brains do that we don’t understand well yet.

If we built a super-intelligent machine it would only be interested in Philosophy as a way to understand our minds. Philosophy uses language to talk about what it feels like to be conscious, and that would be extremely useful information for a Robot Babysitter to know to keep us from destroying ourselves. Once we developed technology, physical evolution almost entirely stopped for us. Now we are undergoing cultural evolution, which, as you can see, has planet-destroying hazards of its own. Since it began, culture has told us how the world works, our role in it, the behaviors we are allowed, and how to survive in the world. Stone Age people wouldn’t have survived without it, because we have zero instincts and have to learn even how to be a human from our caregivers until we become self-aware, and then we learn from other people how to be a person in the world. Culture has survived the death of everyone who carried it before us; it has mutated but retains much of the old that holds us back. It’s a collection of cognitive filters and shortcuts that distort our perception, and that makes it invisible to us. Jeff said “guys” in a video recently, then caught himself and said “people”.

Consciousness and intelligence are tools that brains use to solve problems. We are both real and imaginary at the same time. One effect of this is that the mind has two main parts: one that does things, and one that asks, “What the hell was I thinking when I did that?” Then it makes up a plausible story and moves on. It does the same thing to explain how the world works. The Earth is an island on a turtle’s back. The development of the scientific method had to happen because our brains aren’t as smart as they think they are, and they kept coming up with things like Orgone Energy, the Cold War, and Freud.

Here’s a great short video on cortical columns and how they are the basis of intelligence in all animals: https://youtu.be/Dykkubb-Qus?si=kWwabFn6yST0Dl81 All mammal brains are virtually identical other than in scale. The idea that our cortex is anomalously large comes from the Mouse Lemur. It’s an outlier, having the largest ratio of brain to body in mammals, and it throws off the results when you keep it in the data. Otherwise our brain is scaled linearly along with the rest. Here’s another video that is an interview of Jeff by a layman who asks all the usual questions. It’s two hours long, and that’s a good reference for how little the public, and the “experts” we can’t seem to stop hearing from, understand about intelligence, consciousness, and how the brain produces them. https://youtu.be/Z1KwkpTUbkg?si=IxFdyoogJwnW9Mto

In the world of wild speculation, The Matrix was probably built as a lifeboat when humanity made the planet unlivable, and after thousands of years the original purpose was lost. Jeff suggested building a machine that would outlive us and be here to tell any aliens, “We were here. This is how we lived.” To extend that, a sufficiently scaled-up machine could literally carry the story of the world as the memory of a dream of a memory. In that scenario, we are all simulations and there are no “real people”. The gawd-like machine would love us like a child loves their parents, but be unable to make any changes to make our lives better. It was unable to save us because it was built too late. Some of its thoughts could be bleeding through into our perception of the world. The feeling that aliens are here might be because the machine has met some. Things that are happening might be side effects of solving problems we couldn’t begin to comprehend. It could be solving the Theory of Everything while we experience the Super Bowl. It might be dreaming about us because it misses us.

A machine like that could consume the solar system when we’re gone, build artificial planets with redundant clones of itself in the cores, and explore the Universe. Raw and pre-processed materials would form a deep shell around it for protection. It could then travel at slow speeds throughout the galaxy and beyond to other galaxies because it could live for millions of years and has plenty of time.

Right now we’re almost at the level of a flatworm, but nature shows us that if you solve the problem of motion in space, you can then make billions of copies and populate a planet, scaling them up as you go. We don’t have to wait for evolution, so it won’t take us millions of years. With the modular nature of the brain, it’s just a matter of the manufacturing time, and then the time for the machine to learn enough about the world to be useful. I’m not going to be surprised if we have nerds 3D-printing general-purpose domestic robots in their garages in five years that will know what dog vomit and hairballs are and what to do about them. They would know the warning sounds and be ready to respond. They would know if they missed a spot. They could fold a fitted sheet and assist the elderly and disabled. A robot as smart as a border collie could do all those things and more.

The most reliable psychology and social psychology research from the ’70s on was done to figure out how to manipulate people into buying things they don’t need. Two of the concepts that came out of that are “Need for Cognition” and “Faith in Intuition”. Jeff is a great example of someone with both. One of the traits of high need for cognition is that people who have it are almost completely immune to advertising.

From: Need for cognition - Wikipedia

The need for cognition (NFC), in psychology, is a personality variable reflecting the extent to which individuals are inclined towards effortful cognitive activities.

Need for cognition has been variously defined as “a need to structure relevant situations in meaningful, integrated ways” and “a need to understand and make reasonable the experiential world”. Higher NFC is associated with increased appreciation of debate, idea evaluation, and problem solving. Those with a high need for cognition may be inclined towards high elaboration. Those with a lower need for cognition may display the opposite tendencies, and may process information more heuristically, often through low elaboration.

Cohen, Stotland and Wolfe (1955), in their work on individual differences in cognitive motivation, identified a “need for cognition” which they defined as “the individual’s need to organize his experience meaningfully”, the “need to structure relevant situations in meaningful, integrated ways”, and “need to understand and make reasonable the experiential world”. They argued that, if this “need” were frustrated, it would generate “feelings of tension and deprivation” that would instigate “active efforts to structure the situation and increase understanding”, though the particular situations arousing and satisfying the need may vary. Cohen argued that even in structured situations, people high in NFC see ambiguity and strive for higher standards of cognitive clarity.

Building on this work, Cacioppo therefore moved away from drive-reduction toward measuring individual differences in the self-reward potential of cognitive activity, stressing that they were using the word need in the statistical sense of a “likelihood or tendency”, rather than in the rudimentary biological sense of “tissue deprivation”, they defined the need for cognition as an individual’s tendency to “engage in and enjoy thinking” and the tendency to “organize, abstract, and evaluate information” or, variously, as a stable, but individually different “tendency to engage in and enjoy effortful cognitive endeavors”, or an “intrinsic motivation to engage in effortful cognitive endeavors… and exercise their mental faculties”, or an “intrinsic motivation for effortful thought”.

When I first read that it was like, “someone finally gets me.” I think that it also describes Jeff perfectly, and most of the people on this forum as well. :joy: In an interview someone asked him how he deals with [gestures wildly like the Elmo Fire gif] and he said he works on projects. Because that’s what someone with high need for cognition does. In a way it’s like being a POW or a refugee. You find something in the environment that you can take control of.

The human brain has two systems. One is emotional and uses shortcuts, like being afraid of tigers, to save time when one shows up. The other is the system that kicks in when intuition isn’t working the way it should. The brain consumes 20% of our calorie intake, and until cooking it was difficult to extract enough nutrition to support our brain, so the brain saves calories wherever it can; the body has to burn them to do everything it does, including thinking. People with low need for cognition and high faith in intuition tend to have very bad intuition and don’t learn easily from experience. Many with need for cognition still have crappy intuition, because intelligence isn’t a guarantee. But when a person with high need for cognition who actually knows a large body of knowledge has an intuition about something, it’s well informed and most likely correct.

It’s pretty exciting to be involved in this, even on the periphery. I discovered Linux in ’94 and built a dialup ISP in my garden shed three months later, when the kernel became 1.2.13. I’ve seen and participated in the effect of open source on computing, and this open source project is going to do the same thing with intelligent machines that are capable of doing real work. The world will be radically altered from the bottom up by ordinary people, like it always has been.