Hi all,
I’ve been following Numenta since “On Intelligence” years ago, and have only slowly immersed myself in the theory and NuPIC.
One idea that I believe could contribute to successfully modelling a highly distributed and concurrent biological system in software is to build on a software system that itself has these characteristics.
I remember seeing the discussions on how to scale/distribute NuPIC and thinking that this should not be done as an afterthought.
Coincidentally, there is one runtime (and probably only one) that has been designed from the ground up to support highly concurrent and distributed software systems: the BEAM (which runs languages such as Erlang and Elixir, with OTP as its base library). Since sending and receiving messages is part of the runtime, and the scheduler does a good job of reasonably fair, near real-time scheduling of the processes (actors), this could be a perfect fit for the Thousand Brains Project. Depending on the hardware, millions of concurrently running processes are feasible. If raw or hardware-accelerated performance is needed, the hot paths can be moved to native code while the rest stays on the BEAM, which also has a JIT that significantly improves e.g. string handling.
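To give a flavour of how cheap these processes are, here is a minimal sketch (`ColumnDemo` and the message shapes are made up for illustration, not from any TBP code): it spawns a hundred thousand lightweight processes and round-trips a message through each of them, which is an everyday operation on the BEAM:

```elixir
defmodule ColumnDemo do
  # Spawn `n` lightweight BEAM processes, each acting as a tiny "column"
  # that replies to a :ping with its id. Names are illustrative only.
  def run(n \\ 100_000) do
    parent = self()

    pids =
      for id <- 1..n do
        spawn(fn ->
          receive do
            {:ping, from} -> send(from, {:pong, id})
          end
        end)
      end

    # Sending and receiving messages is built into the runtime.
    Enum.each(pids, fn pid -> send(pid, {:ping, parent}) end)

    # Collect all replies.
    for _ <- 1..n do
      receive do
        {:pong, _id} -> :ok
      end
    end

    :done
  end
end

# ColumnDemo.run(100_000)
```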
Another aspect beyond distribution and concurrency is the temporal one, e.g. interrupting computations that are no longer needed. This is safe in languages running on the BEAM thanks to its share-(almost-)nothing architecture. A scenario I have been playing with in my mind: some form of (column?) voting in real time, where a deadline or another signal tells the voters to stop and pass on whatever result has been gathered so far. The architecture supports this really well, as the sketch below shows.
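A hedged sketch of that scenario (`DeadlineVote`, the message shapes, and the semantics are my own illustrative choices, not TBP or elixir_ne code): a collector asks a set of voter processes for their votes and returns whatever has arrived once the deadline expires, using the `receive ... after` timeout built into the language:

```elixir
defmodule DeadlineVote do
  # Ask each voter process for its vote, then gather replies until either
  # all voters have answered or `deadline_ms` has elapsed; the partial
  # result is passed on as-is when the deadline is reached.
  def collect(voter_pids, deadline_ms) do
    Enum.each(voter_pids, fn pid -> send(pid, {:vote_request, self()}) end)
    deadline = System.monotonic_time(:millisecond) + deadline_ms
    gather(length(voter_pids), deadline, [])
  end

  defp gather(0, _deadline, votes), do: {:all_votes, votes}

  defp gather(remaining, deadline, votes) do
    # Remaining time until the deadline; never negative.
    timeout = max(deadline - System.monotonic_time(:millisecond), 0)

    receive do
      {:vote, v} -> gather(remaining - 1, deadline, [v | votes])
    after
      # Deadline hit: pass on whatever votes arrived in time.
      timeout -> {:deadline_reached, votes}
    end
  end
end
```

Since each voter is an isolated process, a slow or stuck voter cannot block the others or corrupt shared state; the collector simply moves on with the partial result.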
I haven’t implemented any of the actual voting or prediction mechanisms, but I’ve simulated the voting here: GitHub - d-led/elixir_ne: a neural voting experiment
P.S. I have no business affiliation with Erlang; it’s simply the other focus of my interests.
P.P.S. The tech is in some ways a “secret and effective weapon”: Why WhatsApp Only Needs 50 Engineers for Its 900M Users.