I know very little about actor implementations in general, but I have some understanding of BEAM-based systems (e.g., Elixir, Erlang). So, I’ll take a swing at this analogy, in case it helps…
Message Delivery and Dispatching
Any (lightweight) process running on a BEAM instance can send a message to any other process (or indeed, broadcast to a collection of processes). The receiving process may be just about anywhere, as long as its address (i.e., node name, PID) can be resolved and connectivity is available. For example, it could be running in:
- the same BEAM instance (i.e., Elixir runtime)
- another BEAM instance on the same machine
- another BEAM instance on another machine
That said, there are few guarantees about delivery, let alone timing: sending is asynchronous and fire-and-forget, and the system simply makes a “best effort” to deliver the message to the recipient’s incoming “mailbox”. (The one ordering guarantee is that messages sent from a given process to a given recipient arrive, if at all, in the order they were sent.)
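As a concrete sketch, here is what fire-and-forget sending looks like in Elixir (the process, node, and message names are invented for illustration):

```elixir
# `send/2` is asynchronous: it returns at once, whether or not the
# recipient exists or ever reads the message.
parent = self()

pid =
  spawn(fn ->
    receive do
      {:greeting, text} -> send(parent, {:echo, text})
    end
  end)

# Same-node delivery by PID; a process on another node could be
# addressed as {:some_registered_name, :"node@host"} instead.
send(pid, {:greeting, "hello"})

receive do
  {:echo, text} -> IO.puts("round trip: #{text}")
after
  1_000 -> IO.puts("no reply (delivery is best effort)")
end
```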
Complicating matters somewhat, the recipient isn’t required to accept messages in the order they were delivered. Instead, it can use “selective receive”: a series of dispatching patterns which control the order in which incoming messages are dispatched.
Roughly speaking, the BEAM evaluates these patterns for the recipient in a highly optimized manner. It scans the recipient’s mailbox from oldest to newest, testing each message against the patterns in order. The first message that matches any pattern is removed from the mailbox, and the corresponding clause is called to handle its contents; messages that match no pattern simply stay queued.
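In Elixir, these dispatching patterns are the clauses of a `receive` block. A minimal sketch of out-of-order handling (the `:low`/`:high` tags are made up):

```elixir
# The mailbox is scanned oldest-first, and the first message matching
# ANY clause of the `receive` is dispatched; the rest stay queued.
send(self(), {:low, 1})   # arrives first
send(self(), {:high, 2})  # arrives second

# Ask for :high first, so the later message is handled out of order.
receive do
  {:high, n} -> IO.puts("handled high: #{n}")
end

# The :low message was skipped over, but is still in the mailbox.
receive do
  {:low, n} -> IO.puts("then low: #{n}")
end
```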
FWIW, here is a link to a ChatGPT summary.
Caveat: If the number of incoming messages exceeds a recipient’s ability to process them, its mailbox will grow without bound. And, if no pattern ever matches a message, it can remain in the mailbox forever. Either situation can lead to memory and/or processing issues.
Sub-Module Addressing, Dispatching, Filtering, etc.
Diagrams of cortical columns show several levels and sometimes a lot of internal structure (e.g., many different types of neurons). So, using a single address for (say) a Learning Module could get in the way of efficient dispatching, as well as complicate the LM’s design.
In an Elixir implementation, this might be handled in various ways. Typically, messages would contain one or more symbols (e.g., :L1) which the modules’ dispatching patterns could use for matching.
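A hedged sketch of what tag-based dispatching might look like; the `:L1`/`:L2` tags and the toy counting “state” are invented for illustration, not anything from Monty:

```elixir
defmodule LM do
  # A toy LM loop whose state is just a count of messages per layer tag.
  def loop(state) do
    receive do
      {:L1, _payload} -> loop(Map.update(state, :L1, 1, &(&1 + 1)))
      {:L2, _payload} -> loop(Map.update(state, :L2, 1, &(&1 + 1)))
      {:dump, caller} ->
        send(caller, {:state, state})
        loop(state)
      :stop -> :ok
    end
  end
end

lm = spawn(LM, :loop, [%{}])
send(lm, {:L1, :sensor_patch})
send(lm, {:L2, :feedback})
send(lm, {:dump, self()})

receive do
  {:state, s} -> IO.inspect(s)
end
```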
Alternatively, because BEAM processes are pretty cheap, the LM could be divided into pieces and a “front end” process could dispatch, edit, and/or filter (i.e., modulate?) the incoming messages as needed.
Finally, note that there is nothing keeping a process from sending messages to itself. So, it might handle :L1_foo messages by sending related messages with other symbols (e.g., :L2_bar).
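A speculative sketch combining both ideas: a cheap “front end” process filters and forwards tagged messages to per-layer sub-processes, and a handler sends itself a follow-up message. All names here (`FrontEnd`, `:L1`, `:L1_followup`, etc.) are invented:

```elixir
defmodule FrontEnd do
  # Forward messages whose tag has a registered sub-process;
  # silently drop (filter) everything else.
  def loop(subs) do
    receive do
      {tag, _payload} = msg when is_map_key(subs, tag) ->
        send(Map.fetch!(subs, tag), msg)
        loop(subs)
      _filtered ->
        loop(subs)
    end
  end
end

parent = self()

l1 =
  spawn(fn ->
    receive do
      {:L1, x} ->
        # Self-send: queue follow-up work as an ordinary message.
        send(self(), {:L1_followup, x})
    end

    receive do
      {:L1_followup, y} -> send(parent, {:result, y})
    end
  end)

router = spawn(FrontEnd, :loop, [%{L1: l1}])
send(router, {:L1, 42})
send(router, {:noise, :ignored})  # filtered out by the front end

receive do
  {:result, n} -> IO.puts("L1 pipeline produced #{n}")
end
```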
Note: I have no clue about how a Python implementation would handle this sort of thing. Comments and clues welcome…
Connectivity, Attention, etc.
Let’s compare neural and BEAM-based connectivity and attention a bit (corrections welcome!) …
- Neural connectivity is limited by “wiring” constraints (i.e., axons, dendrites, synapses). In contrast, the BEAM allows any process to send messages to any other process.
- In neural connectivity, the interpretation of a “message” is defined by the wiring of the sending and/or receiving neuron(s). The BEAM’s message dispatching is based on addresses and internal message cues.
- Neural message timing is controlled by physical factors; a BEAM-based emulation would have to create its own timing support.
- Neural attention is controlled by the nature of the message, the state of the receiving neuron(s), the synapses involved, etc. A BEAM-based Monty implementation would have to emulate all of this using addresses, message cues, and the state of the receiving process(es).
Although LMs could be told to “pay attention” to specified modules and/or sensor regions, this could force the BEAM to carry a lot of unwanted messages. A better approach might use something like the publish–subscribe (aka PubSub) pattern.
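As a sketch of that approach, Elixir’s built-in `Registry` (started with `keys: :duplicate`) provides a minimal pub/sub: publishers dispatch only to processes that have subscribed to a topic, so uninterested processes carry no traffic. The topic and message names below are invented:

```elixir
# Start a registry that allows many subscribers per key (topic).
{:ok, _} = Registry.start_link(keys: :duplicate, name: MontyPubSub)

# Subscriber: an LM registers interest in one sensor region.
{:ok, _} = Registry.register(MontyPubSub, :sensor_region_3, nil)

# Publisher: the message goes only to the topic's current subscribers.
Registry.dispatch(MontyPubSub, :sensor_region_3, fn entries ->
  for {pid, _value} <- entries, do: send(pid, {:feature, :edge_at_45deg})
end)

receive do
  {:feature, f} -> IO.puts("subscriber saw #{f}")
end
```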