Hi, Sujith from India!

Hi everyone! I’m Sujith, an AI engineer based in India, deeply curious about how intelligence works—both in the brain and in machines. My professional work has focused on building agentic capabilities into enterprise platforms, but my core drive lies in understanding the foundational principles of learning and reasoning.

I discovered the Thousand Brains Project through a conversation with ChatGPT, while exploring neuroscience-inspired AI and searching for people who shared a similar curiosity. It felt like I finally found a space where the questions I’ve been asking—especially around epistemology and cognition—were being seriously explored.

Philosophy, especially epistemology, plays a huge role in how I think about AI. I’m drawn to questions like: What does it mean to “know” something? Can machines truly understand? How do brains model the world so efficiently? These questions fuel my interest and make me especially excited to engage with this community.

Looking forward to learning, contributing, and connecting with all of you!

Hi Sujith, and welcome!

There have been many books written on the essence of knowledge and understanding, although I don’t think any of them have arrived at satisfactory answers.

For my part I think it is perhaps not as magical as some might believe. For my research I consider that to “know” something is to have a selection of models of that something in the brain, be they abstract or concrete concepts. For example, we “know” what money is both in the look, feel, and sound of the coins and notes, and in the abstract concept of what it can be used for. Similarly we have models for shoes, ships, sealing wax, cabbages and kings (although many won’t have models for sealing wax any more). The complexity of the models we hold determines how well we “know” something, or indeed “understand” something.

I think this extends to all creatures, whether they are aware of it or not. A cat’s brain holds models of all the things it understands, at the level at which it understands them. The cat’s models can be very sparse: it has some understanding of glass such that it does not try to push through it like an insect does; possibly its superior vision allows it to see the glass.

Also models will reference other models in a hierarchical structure.

So if a machine can hold similar models to a biological brain and manipulate them in similar ways there would be no reason not to consider the machine to be “understanding”.

How the brain achieves this so efficiently is indeed an intriguing question, which I believe motivates the people at TBP and many more of us. For my part I am trying to tackle it with practical experiments in how to build models from sensory input data from the ground up. Quite literally, a model is needed for ‘the ground’: what it means to be on it, and at what incline it becomes a wall.

curious about how intelligence works […] understanding the foundational principles of learning and reasoning […] What does it mean to “know” something?

I’m taking a unique approach to this (as far as I know). Have you heard of “PKMS” before? It’s short for “personal knowledge management system(s)” and I’ve been mixing that idea with the extended mind. I consider my wiki of linked markdown notes to be part of my extended mind.

The core problem with any PKMS is finding your notes. I consider a note “well integrated” if I can find it quickly. As a coupled system with my personal wiki, I consider myself to “know” something if I can quickly get the information out of my notes - if it’s well-integrated. If something is in my wiki but I can’t find it, I don’t “know” that thing.

I’m working on a personal project with PKM at its core. I wrap some markdown notes with the actor model, allowing them to have event-driven behavior instead of just static text. I consider my system to “know” something if that something is well-integrated into its behavior.
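Roughly, the shape of it is something like the sketch below. This is just a toy illustration of the wiring, not my actual system; the NoteActor class and the "link"/"query" message types are made-up names for the sake of the example.

```python
# Toy sketch: a markdown note wrapped in a minimal actor with a mailbox,
# so it can react to events instead of sitting as static text.
import queue
import threading

class NoteActor:
    def __init__(self, title, markdown):
        self.title = title
        self.markdown = markdown          # the static note body
        self.links = set()                # other notes this one references
        self.mailbox = queue.Queue()      # incoming events
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Deliver an event to this note's mailbox."""
        self.mailbox.put(message)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:               # shutdown signal
                break
            kind, payload = msg
            if kind == "link":            # another note linked to this one
                self.links.add(payload)
            elif kind == "query":         # someone asked what this note holds
                payload(self.markdown, self.links)

# Usage: one note receives a link event, then answers a query.
inbox = NoteActor("inbox", "# Inbox\nUnsorted thoughts.")
tbp = NoteActor("thousand-brains", "# Thousand Brains\nReference frames everywhere.")
tbp.send(("link", inbox.title))
tbp.send(("query", lambda body, links: print(body, links)))
tbp.send(None)                            # shut down so the example exits cleanly
tbp._thread.join()
```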

I don’t believe in “G” or AGI; there is always bias. If my network of linked notes is my externalized memory, my actors are my externalized agency. Instead of trying to identify a general thinking skill, I’m working on building out my network.

Thank you for your thoughtful reply! I really enjoyed your framing of knowledge as nested models grounded in sensory experience.

It got me thinking about what enables this kind of modeling in the first place. Even if we learn through experience, aren’t there innate cognitive structures that shape how we learn, like how we represent space, causality, or especially time?

In that light, I’m curious: what’s your view on whether there are innate representational primitives that shape model formation?

Especially when it comes to abstract but essential constructs like time, how do you think about modeling it? Is time inferred purely from sensorimotor sequences, or is there a built-in structure that guides how brains and systems come to understand it?

I’d love to hear your thoughts on how rationalist ideas fit into this model-based framework and whether the balance between learned and innate structures plays a role in how we approach artificial intelligence as well.

Certainly there will be structures dedicated to particular tasks. Whether in nature these emerge spontaneously or are preconstructed I couldn’t say, but for sure I will need to architect these structures myself in order to short-circuit the learning process, which is still poorly understood.

I am hoping a suitably interconnected group of general-purpose artificial neurons with a variety of activation functions will be sufficient to represent a limb, or a more general concept such as the ground, or an object that can be manipulated.

Take the limb for example, which is where I am starting. Live sensory data indicates how each of the servo joints is positioned, power data indicates how much effort is being expended to keep it there, and touch data indicates when the foot is on the ground. The model for what it feels like to walk, for this one limb, is the pattern of sensory data the limb is expected to generate as it moves through the walking motion over a period of time - the all-important temporal aspect.

If the sensory pattern deviates from the anticipated pattern, say the limb hits an obstacle and the power spikes, or the ground is felt sooner than expected or not at all (like when you think you’re on the last step of a staircase and you’re not), then other neurons will spike, triggering response behaviour.
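To make the temporal-pattern idea concrete, here is a toy sketch in Python of what the comparison could look like. The sensor values, tolerances, and gait table are entirely made up for illustration and are not taken from my actual rig.

```python
# Expected readings over one gait cycle for a single limb:
# (joint_angle_deg, power_watts, foot_on_ground)
EXPECTED_GAIT = [
    (10.0, 2.0, True),    # stance: foot planted, moderate effort
    (25.0, 3.5, True),    # push-off: power rises
    (40.0, 1.0, False),   # swing: foot in the air, low effort
    (15.0, 1.5, False),   # reach: preparing to land
]

ANGLE_TOLERANCE = 8.0     # degrees
POWER_TOLERANCE = 2.0     # watts

def check_step(phase, angle, power, on_ground):
    """Compare live sensory data against the expected pattern for this phase.

    Returns a list of 'surprises' that would trigger response behaviour,
    e.g. the power spike of hitting an obstacle, or the missing-step feeling
    of expecting ground contact and not getting it.
    """
    exp_angle, exp_power, exp_ground = EXPECTED_GAIT[phase]
    surprises = []
    if abs(angle - exp_angle) > ANGLE_TOLERANCE:
        surprises.append("joint not where expected")
    if power - exp_power > POWER_TOLERANCE:
        surprises.append("power spike - obstacle?")
    if on_ground != exp_ground:
        surprises.append("ground felt too soon" if on_ground
                         else "expected ground, found none")
    return surprises

# The limb strikes something during push-off: power spikes, joint stalls.
print(check_step(phase=1, angle=12.0, power=7.0, on_ground=True))
```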

A higher level will coordinate all of the limbs into a walking motion using similar neural structures and combining IMU sensory data. Here there will need to be a model representing the limb’s position relative to the entire body.

In answer to your question about modelling time, I don’t think there is anything special about it. I think our concept of time is determined by the speed at which our neurons operate (in the tens of milliseconds) and that imposes natural limits on our perception of the world.

For an emulation of a brain-like neural network running on a non-real-time computer, I will need to distribute a time-sense signal to all neurons to simulate the passage of time similar to that experienced by a biological brain.
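In code, distributing that time-sense signal could look roughly like the toy sketch below; the Neuron class and the tick size are illustrative assumptions, not part of any existing framework.

```python
# Toy sketch: a central clock advances simulated time in fixed steps and
# broadcasts each tick to every neuron, so they all share the same
# simulated "now" regardless of how fast the host computer runs.
class Neuron:
    def __init__(self, name):
        self.name = name
        self.last_tick = 0.0

    def on_tick(self, sim_time_ms):
        # Each neuron updates its state against the shared simulated clock.
        self.last_tick = sim_time_ms

def run_simulation(neurons, steps, tick_ms=10.0):
    """Advance simulated time in ~tens-of-millisecond steps (roughly the
    speed biological neurons operate at) and distribute the tick to all."""
    sim_time = 0.0
    for _ in range(steps):
        sim_time += tick_ms
        for n in neurons:
            n.on_tick(sim_time)
    return sim_time

population = [Neuron(f"n{i}") for i in range(100)]
print(run_simulation(population, steps=50))   # 500.0 ms of simulated time
```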

On your last point I think it is too early in the development of Artificial General Intelligence to make any proclamations about the correct way in which to implement it. I would like to create some real world behavioural capabilities (as opposed to humanoid shaped machines carefully configured to simulate human behaviour) and see what is learned on the journey.

Innate structures or behaviours will be required, not least to motivate the artificial creature to get up off the charging station and go explore its environment.