The YouTube transcript system has a Freudian slip that isn't too far from a warning from reality:
In my amateur understanding, there can't be strict physical global synchronization in the brain: as with the CAP theorem, it's easy to get partitioned, and at any given moment one can say the brain is in a state of partial connectivity (almost by the modern understanding/definition). This, however, is the key to scalability, not a flaw.
The top-right quadrant is the evil one.
I haven't watched the presentation video to the end yet, so perhaps this post turns into useless commentary, but I think there might be something worth considering: if one thinks of scaling Monty, one could think of one Monty step handling a large number of (L)Ms, whereas one could also turn it around by softening the synchronization constraint: having many Monty steps each handle one to a few (L)Ms.
PiMs probably go in that direction; however, on another architectural level, a shared-nothing architecture might allow the desired scalability with whatever hardware/connectivity improvements become available:
Since, as mentioned, there is a lot of data overlap/reuse, perhaps not everything needs to be copied/communicated; this could be done sparsely/selectively.
Perhaps university collaborations are the perfect setting to try such things out.
At the latest, real-time Monty systems will likely have to abandon rigid steps and become completely asynchronously communicating systems. That doesn't exclude some kind of globally published signals, but without hard waiting/synchronization constraints.
In fact, in an actor system, each actor is uniquely addressable. The sparsity of connections is simply the list of process identities (addresses) of the actors in the system, and new addresses can be communicated via messages as well. "A and B wired together" can be mapped onto "A sends its identity to B", or vice versa.
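To make that mapping concrete, here is a minimal, hypothetical sketch (not Monty code, and Python rather than an Erlang/Elixir-style runtime): an actor is just a mailbox plus a handler, the peer list is the "wiring", and wiring happens by sending an address as an ordinary message.

```python
import queue

class Actor:
    """Toy actor: a mailbox (its address, in effect) plus a handler."""
    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()
        self.peers = []      # sparse connectivity: the addresses this actor knows
        self.received = []   # ordinary payloads, kept for inspection

    def send(self, message):
        self.mailbox.put(message)

    def handle(self, message):
        # A "wire" message carries another actor's identity; learning it
        # at runtime is exactly "A and B wired together".
        kind, payload = message
        if kind == "wire":
            self.peers.append(payload)
        else:
            self.received.append(message)

    def run_once(self):
        # Drain whatever is currently in the mailbox; never block.
        while True:
            try:
                msg = self.mailbox.get_nowait()
            except queue.Empty:
                break
            self.handle(msg)

# "A and B wired together" == A's identity is sent to B as a message:
a = Actor("A")
b = Actor("B")
b.send(("wire", a))                           # B learns A's address
b.run_once()
b.peers[0].send(("obs", "pose-hypothesis"))   # B can now message A directly
a.run_once()
```

The point of the sketch is only that connectivity is data: no global routing table is needed, and new connections propagate through the same channel as everything else.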
Update 1: especially for the cycle in the inference loop, relaxing the temporal sequence of inference within a step might resolve the conceptual loop by making each actor independent, with opportunistic inputs and outputs. A bit like my async voting experiment (elixir_ne), plus the newest post from today:
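A toy sketch of what "opportunistic inputs" could mean here, in the same hypothetical Python style (the function name and the averaging rule are placeholders, not Monty's actual evidence update): a step consumes whatever peer votes have already arrived, and simply proceeds without the rest.

```python
import queue

def opportunistic_step(own_evidence, vote_inbox):
    """One inference step that uses the peer votes already present,
    instead of blocking until every peer has reported."""
    votes = [own_evidence]
    while True:
        try:
            votes.append(vote_inbox.get_nowait())  # take what's there now
        except queue.Empty:
            break  # don't wait: late votes are used on the next step
    # Combine by averaging (stand-in for a real evidence update).
    return sum(votes) / len(votes)

inbox = queue.Queue()
inbox.put(0.9)                              # one peer voted early
# a second peer hasn't voted yet; the step proceeds anyway
estimate = opportunistic_step(0.5, inbox)   # average of 0.5 and 0.9
```

There is no "waiting for the slowest module" in this scheme, which is what breaks the rigid step and, with it, the conceptual loop.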
Update 2: congrats on the cool thesis!


