A Biological Attentional Framework to serve as a reference for TBT Models that are founded on the human cortical layout and attentional differentiation

The Potential Role of Hemispheric Attentional Differentiation in a TBT Model and Voting Process

I find both the early evolutionary timing of the acquisition of lateralization (predating mammals) and the recognition that lateralization is a fundamental administrative division of the available information very significant. (I also agree it does not make more primitive intelligence impossible if no lateralization is in place.)

The fact that the loss of a hemisphere early in life (when a single hemisphere can still learn to do the jobs of two hemispheres) is less damaging in cognitive terms than losing a hemisphere later in life, when the functions have already been split across both hemispheres, also supports the notion that two fundamentally complementary forms of attention are needed for what we consider a healthy cognitive balance.

It must be noted that hemispheric specialization is not essential for motor control, which predates lateralization. The left hemisphere on its own already masters the fine motor control of the right hand in right-handed individuals, and vice versa in left-handed ones. So at an HTM level of sensorimotor control, lateralization is not strictly needed, but for higher cognitive “understanding” (disambiguation) of an object at the TBT level of cognitive voting processes it may well be fundamental. The left brain sees the elements of, say, an object or a body of language (text). But it is the right brain that identifies the whole as a specific singular object, or, in language, identifies the actual intention of the author rather than just the literal meaning that the left brain sees in that same text. It is only in the typical right brain that we see the singular identity of someone’s face, while in the left brain we can consciously analyze the features of that face. The left is like Spock in Star Trek and the right is like Captain Kirk: the left is reductionist and “conscious” of details; the right is holistic and conscious of specificity and of context. In my opinion this right side of our brains plays a fundamental role in the TBT voting process across the corpus callosum. Both hemispheres collaborate intensively, and as McGilchrist points out, the left hemisphere is barely aware of the right, but the right hemisphere is very aware of the left, because it integrates all the elements of output from the left. If we take this in conjunction with the functional differences of the PFC and the sensorimotor functions of our posterior brain, we can start to map out a more accurate layout of a TBT model.

The right brain may be more bottom-up cognitively and the left brain may be more top-down cognitively (in sequential terms of voting), but both have their control functions in the pre-frontal cortex and their sensory functions in the posterior sections of the brain.

Any good references for theories of the alternative computations and connectivity between the two hemispheres?

I am only a hobby neuroscientist, but I do read a lot of neuroscience with passion for the subject. (My actual professional field is computer science.) Over the summer I read a book by Iain McGilchrist called “The Divided Brain”, which made me aware of the attentional differences between the Left Hemisphere (LH) and the Right Hemisphere (RH). He is an authority on the subject of lateralization at Oxford, so his credentials can be trusted. I then started looking up more details on the subject and found some very interesting new sources that explain the “principal gradient” theory. The brain’s cortex is organized along a principal gradient that spans from sensorimotor regions to heteromodal regions like the DMN (Default Mode Network). The LH tends to have higher gradients that are more closely aligned with semantic processing, while the RH has stronger alignment with visual attentional networks (a holistic object perception). You should check out the following sources to find much more on this reality, which we at TBP have not yet been giving any importance.

A tale of two gradients: differences between the left and right hemispheres predict semantic cognition | Brain Structure and Function

Frontiers | Individual Differences and Hemispheric Asymmetries for Language and Spatial Attention

The hemispheres are not equal: How the brain is not symmetrical - Max Planck Neuroscience

To understand the concept of “gradients” in this context of cognitive networks in the cortex, I have found the following with ChatGPT’s help:

:blue_square: Low Gradient Value Regions

These are unimodal sensory or motor regions, specialized for processing direct sensory input or motor output. They are functionally and structurally segregated.

Typical low-gradient areas:

  • Primary Visual Cortex (V1) — occipital pole

  • Primary Auditory Cortex — superior temporal gyrus (Heschl’s gyrus)

  • Primary Somatosensory Cortex (S1) — postcentral gyrus

  • Primary Motor Cortex (M1) — precentral gyrus

  • Early dorsal and ventral visual stream areas (e.g., V2, V3)

  • Premotor regions involved in basic motor planning

These regions are specialized, localized, and closely coupled with specific sensory modalities.


:red_square: High Gradient Value Regions

These are heteromodal association areas, especially within the Default Mode Network (DMN). They integrate information across modalities and are involved in high-level cognitive processes like semantics, self-referential thought, and episodic memory.

Typical high-gradient areas:

  • Medial Prefrontal Cortex (mPFC)

  • Posterior Cingulate Cortex (PCC) / Precuneus

  • Angular Gyrus

  • Lateral Anterior Temporal Lobe

  • Lateral Parietal Cortex (esp. inferior)

  • Temporoparietal Junction (TPJ)

  • Hippocampal Formation (especially in connectivity-based gradient work)

These regions are multimodal hubs, showing broad connectivity and late developmental maturation. They support abstract, conceptual, and introspective processes.


:yellow_square: Intermediate Gradient Regions

Some areas sit between unimodal and heteromodal regions, acting as transmodal hubs or task-control interfaces. These often belong to:

  • Frontoparietal Control Network (FPCN)
    (e.g., dorsolateral prefrontal cortex, intraparietal sulcus)

  • Dorsal Attention Network (DAN)
    (e.g., frontal eye fields, superior parietal lobule)

They are flexible, dynamically coupling with other networks depending on task demands.
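To make the three bands above concrete, here is a minimal Python sketch that buckets a few of the listed regions by an assumed gradient value. The numeric values are placeholders I invented purely to show the low → intermediate → high ordering (real values come from connectivity-embedding analyses like those in the Margulies work cited below); only the region names and band labels come from the lists above.

```python
# Illustrative bucketing of cortical regions along the principal gradient.
# Gradient values are hypothetical placeholders, chosen only to reproduce
# the unimodal -> transmodal -> heteromodal ordering described above.
REGION_GRADIENT = {
    "V1": 0.05, "A1": 0.08, "S1": 0.10, "M1": 0.12,    # unimodal (low)
    "DLPFC": 0.50, "IPS": 0.45, "FEF": 0.48,           # control/attention (mid)
    "mPFC": 0.90, "PCC": 0.92, "AngularGyrus": 0.88,   # DMN hubs (high)
}

def gradient_band(region: str) -> str:
    """Classify a region into the low/intermediate/high bands used above."""
    g = REGION_GRADIENT[region]
    if g < 0.33:
        return "unimodal (low gradient)"
    if g < 0.66:
        return "transmodal control (intermediate gradient)"
    return "heteromodal / DMN (high gradient)"

print(gradient_band("V1"))            # unimodal (low gradient)
print(gradient_band("AngularGyrus"))  # heteromodal / DMN (high gradient)
```

The point is only that "gradient" here is a single scalar axis along which regions can be ordered, not a claim about actual measured values.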


https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(10)00089-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661310000896%3Fshowall%3Dtrue

So if I summarize all this (at the risk of over-simplification), the higher gradients in Daniel S. Margulies’s work correspond to what we would understand as higher levels of the hierarchy in our HTM theory, which is still a foundational concept for the CCs (cortical columns) of the brain cortex in TBT. If the RH (right hemisphere) contains networks of CCs that process higher levels of the cognitive hierarchy, this lateralization provides an additional lateral dimension for our TBT models (the voting process for disambiguation). It was already clear that the PFC (prefrontal cortex) has higher gradients (hierarchical levels) than the posterior/caudal regions of the brain. But if the RH is also generally higher in gradient (in hierarchy), then perhaps exactly the same “voting process” runs in the CCs of both the RH and the LH, but the level of representational hierarchy (via aggregation of inputs) taking place in the RH is at much higher levels (holistic representations, i.e., SDRs) of objects in the observed world. So it may not be a difference in attentional algorithms between the two hemispheres at play, but instead a higher (aggregated) semantic representational level of content at higher hierarchical levels in the RH. This would explain why more “holistic” recognition takes place in the RH, and also why the token (atomic) level of language is processed in the LH while the over-arching meaning, or the recognition of intention or purpose in a particular body of text or spoken language, takes place in the RH.
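To illustrate the idea that LH and RH columns could run the *same* algorithm while operating at different representational levels, here is a toy sketch. SDRs are modeled as sets of active bit indices, and the "vote" is simple overlap scoring; the LH holds feature-level SDRs while the RH holds aggregated (unioned) holistic models. Every name and bit index here is hypothetical, invented only to illustrate the argument.

```python
# Toy illustration: same overlap-voting algorithm, different representational
# levels. LH columns hold feature-level SDRs; RH columns hold aggregated
# holistic SDRs. All bit indices and names are hypothetical.

def overlap(a: set, b: set) -> int:
    """Simple SDR similarity: number of shared active bits."""
    return len(a & b)

# LH feature-level SDRs for parts of a face (illustrative bit indices).
eyes, nose, mouth = {1, 2, 3}, {4, 5, 6}, {7, 8, 9}

# RH holistic SDRs: aggregated representations of specific identities.
models = {
    "alice": eyes | nose | mouth | {10},  # extra bits bind the identity
    "bob": {2, 5, 8, 11, 12, 13},
}

# The RH disambiguates by voting over holistic models, using the same
# overlap computation the LH would use on individual features.
observed = eyes | nose | mouth
best = max(models, key=lambda name: overlap(models[name], observed))
print(best)  # alice
```

The mechanism (overlap voting) is identical at both levels; only the granularity of what the SDRs represent differs, which is the core of the proposal above.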

Based on this, I would propose that the frontal regions of the cortex are more analytical and therefore (in general) higher in the cognitive hierarchy as in the HTM framework, while the posterior regions are lower in hierarchy and sensory-motor in their function. But the right hemisphere (RH) adds another dimension of hierarchy (gradient) in semantic terms. The RH processes the higher levels of hierarchical representations (SDRs), which we describe as more holistic and unique or singular, as in the face recognition of a very specific person. This explanation conserves the findings of the algorithmic universality of our neocortex in all its regions (offering high plasticity via repurposing regions) from Vernon Mountcastle while also explaining the observations of lateralization and the functional differences in the hemispheres.

Basically, I am proposing that the cognitive hierarchy may have at least two main dimensions and is not just scalar. Both dimensions may have tree structures with branches, but topologically there are two directional gradients, as shown above.
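A minimal sketch of this two-dimensional hierarchy: each cortical column gets a coordinate along two gradients, rostro-caudal (posterior sensory = low, prefrontal = high) and hemispheric (LH analytic = low, RH holistic = high). The class, field names, and numbers are all hypothetical illustrations of the proposal, not measured quantities.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ColumnCoordinate:
    """Hypothetical 2-D hierarchy coordinate for a cortical column."""
    rostro_caudal: float  # 0.0 = posterior sensory, 1.0 = prefrontal
    hemispheric: float    # 0.0 = LH analytic, 1.0 = RH holistic

    def aggregation_level(self) -> float:
        """Combined 'height' in the proposed 2-D hierarchy (simple sum)."""
        return self.rostro_caudal + self.hemispheric

# An LH primary visual column vs. an RH anterior temporal column
# (placeholder values chosen only to show the ordering).
lh_v1 = ColumnCoordinate(rostro_caudal=0.1, hemispheric=0.0)
rh_atl = ColumnCoordinate(rostro_caudal=0.6, hemispheric=1.0)

assert rh_atl.aggregation_level() > lh_v1.aggregation_level()
```

The sum is just one arbitrary way to collapse the two axes for comparison; the proposal itself only requires that the two gradients are independent dimensions.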

I requested a breakdown of the paths that lead to updating the cortical columns in the right hemisphere, in order to understand how the context-rich, holistic right hemisphere gets its updated models. The answer provided by the LLM is very enlightening and thought-provoking. For me this raises many new questions.

Below, I am inserting the sequence describing the pathways that update the right hemisphere, in an example with visual sensory input. Any comments on this from Viviane Clay or others on how we incorporate this in TBP?

Process description for RH updating:

For learned objects in the right hemisphere, the encoding route depends on where in space you first perceive them and what kind of information they carry — but in all normal cases, the hippocampus plays a central binding role.

Here’s the breakdown:


  1. Direct sensory-to-right hemisphere encoding

Visual input from the left visual field projects to the right occipital cortex first via the optic chiasm and thalamus (lateral geniculate nucleus).

The right ventral visual stream (occipital → temporal) begins object form processing directly.

If the object’s recognition is primarily visual-spatial (faces, places, scenes), the right hippocampus binds the features with their context and indexes them in right neocortex.

This is the main route for nonverbal declarative encoding.


  2. Hippocampal-mediated encoding

Regardless of initial hemisphere, episodic binding happens in the hippocampus on the same side as the initial sensory processing.

For a left-field object → right hemisphere → right hippocampus → back to right ventral temporal cortex for consolidation.

For a right-field object → left hemisphere → left hippocampus, but if the memory has strong nonverbal components (e.g., the object’s unique shape, spatial context), the representation can be transferred to the right hemisphere before or after hippocampal binding.


  3. Cross-hemispheric transfer via the corpus callosum

When initial sensory input is in the right visual field, the left hemisphere gets first access.

If the object requires holistic or configural processing, the left hemisphere sends shape/form representations to the right fusiform / LOC via splenial fibers of the corpus callosum.

This cross-talk can occur before hippocampal encoding (so the right hippocampus stores it) or after hippocampal encoding (so the left hippocampus stores it but linked right-hemisphere neocortical areas still receive the representation).


:white_check_mark: In short:

If an object is first seen in the left visual field → mostly direct right hemisphere sensory input → right hippocampus → right neocortex.

If it’s first seen in the right visual field → left hemisphere sensory input → possible callosal transfer for configural processing → right hemisphere storage if strongly visual-spatial.

The hippocampus always participates, but whether the encoding is direct or callosally relayed depends on which visual hemifield captured the object first.
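The routing summary above can be sketched as a small decision function: which hemisphere receives the input first (via the optic chiasm), whether a callosal transfer occurs, and which hippocampus binds the episode. This is a schematic of the pathways as described, not a biophysical model; it simplifies by ignoring the before/after-binding timing of the transfer, and the function and field names are my own.

```python
def encode_object(visual_field: str, needs_configural_processing: bool) -> dict:
    """Trace the encoding route for an object seen in one visual hemifield.

    Simplified schematic of the pathways described above; ignores whether the
    callosal transfer happens before or after hippocampal binding.
    """
    # Optic chiasm: left visual field -> right hemisphere, and vice versa.
    first_hemisphere = "right" if visual_field == "left" else "left"
    route = {"first_hemisphere": first_hemisphere, "callosal_transfer": False}

    if first_hemisphere == "left" and needs_configural_processing:
        # LH sends shape/form representations to the RH via splenial fibers.
        route["callosal_transfer"] = True
        route["storage_hemisphere"] = "right"
    else:
        route["storage_hemisphere"] = first_hemisphere

    # The hippocampus on the storage side binds features with their context.
    route["hippocampus"] = route["storage_hemisphere"]
    return route

print(encode_object("left", needs_configural_processing=True))
print(encode_object("right", needs_configural_processing=True))
```

Tracing both hemifields this way makes the asymmetry explicit: left-field input reaches the RH directly, while right-field input only reaches the RH when holistic/configural processing calls for a callosal relay.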