@Rich_Morin surely, demos with Livebook would be a much fancier and arguably more readable option. The only catch, I think, is that there are currently no free Livebook hosting options, although a GitHub repo with proper instructions should do. Good idea for further demos. I only used ideone for quick iteration and as a quick intro for anyone looking into Elixir in the TBP context.
I’m a bit torn on the various aspects the three of us (@naramore, @Rich_Morin and myself) are tackling. All of them are definitely of value for a future implementation in TBP, or alongside it, based on the same principles. There are lots of sub-problems to be solved, and maybe lots of proofs of concept to be written for each sub-aspect.
Perhaps we could move towards the RFC idea, with each RFC proposing a solution to a specific, valid problem within the TBP context.
In the context of TBP, these seem to be, e.g.:
- In Python, concurrency and parallelism are late add-ons and thus don’t provide the robustness and the horizontal and vertical scalability needed by modules communicating via the Cortical Messaging Protocol → use lightweight BEAM (Elixir/Erlang) processes as the units of concurrency and parallelism
- In Python, concurrency and distribution are two separate concepts that require different approaches, which adds to implementation complexity → use OTP (Erlang/Elixir) clustering to unify messaging between processes on one machine and across a cluster
- Too many infrastructural dependencies may be required to run a distributed TBP system with scheduling, persistence and messaging → use OTP’s (Erlang/Elixir) built-in mechanisms for scheduling, distribution, storage, robustness and messaging instead of re-implementing Erlang via dependencies (a minimal sketch follows this list)
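To make the first two points concrete, here is a minimal, hedged Elixir sketch (the module and process names `LearningModule` and `:lm_1` are hypothetical, not from the Monty codebase): each module is a lightweight BEAM process, and the same `send/2` primitive works locally and, once nodes are clustered, across machines.

```elixir
# Hypothetical sketch: a "learning module" as a lightweight BEAM process.
defmodule LearningModule do
  # Spawn a process that waits for CMP-style messages and register it by name.
  def start(name) do
    pid = spawn(fn -> loop(name) end)
    Process.register(pid, name)
    pid
  end

  defp loop(name) do
    receive do
      {:observation, from, features} ->
        # Real matching/voting logic would go here.
        send(from, {:vote, name, length(features)})
        loop(name)
    end
  end
end

LearningModule.start(:lm_1)
send(:lm_1, {:observation, self(), [:edge, :corner]})

# After Node.connect(:"monty@other_host"), the same message could be
# addressed to a process on another machine with no code changes:
#   send({:lm_1, :"monty@other_host"}, {:observation, self(), [:edge]})

receive do
  {:vote, name, score} -> IO.inspect({name, score})
end
```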
Now comes the potentially tricky part. The more I read the current implementation, the more I see that without a dedicated team (or an individual with free time), the switch might not quite fit into TBP’s tight roadmap. E.g., recently @brainwaves mentioned that scaling out is not a current priority. Thus, a couple of hybrid options for RFCs:
- Switching to Elixir would sink too much of the small TBP team’s time into technology instead of basic research → incrementally introduce isolated concurrent processes as Actors in Monty, in Python, creating a blueprint for a future concurrency-native solution, e.g. in Elixir
- Switching to Elixir to test physical parallelism of e.g. robots, sensors, or off-loaded learning modules would require too much rewriting up-front → treat the concurrent and distributed parts of Monty as Actors, and let them communicate via explicit asynchronous messages (rather than hiding the messaging behind language-level RPC)
- Some Python modules or dependencies will not be easily portable → leave the implementation options open by using the Actor Model, e.g. connecting via a common protocol (either Erlang clustering via e.g. Pyrlang, or something more neutral, e.g. brokerless ZeroMQ); see the sketch after this list
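To illustrate the last two points, a hedged sketch of what the Actor shape could look like on the BEAM side (`SensorActor`, `:sensor_1` and the node name are all hypothetical): the observation is a one-way asynchronous cast, and the reply arrives as an ordinary message rather than being hidden behind a synchronous, RPC-style call.

```elixir
# Hypothetical sketch: an actor that exchanges explicit async messages.
defmodule SensorActor do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, :ok, opts)

  @impl true
  def init(:ok), do: {:ok, %{}}

  # Asynchronous: the caller does not block waiting for a result;
  # the reply is just another message sent back to `reply_to`.
  @impl true
  def handle_cast({:observe, reply_to, patch}, state) do
    send(reply_to, {:features, extract_features(patch)})
    {:noreply, state}
  end

  defp extract_features(patch), do: Enum.take(patch, 3)  # placeholder
end

{:ok, sensor} = SensorActor.start_link(name: :sensor_1)
GenServer.cast(sensor, {:observe, self(), [1, 2, 3, 4]})

receive do
  {:features, fs} -> IO.inspect(fs)
end

# If a Python-side actor joined the cluster via Pyrlang (or a brokerless
# ZeroMQ bridge were used instead), the addressing would stay the same idea:
#   send({:py_actor, :"py@robot_host"}, {:observe, self(), patch})
```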
All of these can definitely be refined and shortened into impactful RFC titles. I’m sorry, I won’t be able to do much in the coming days. Feel free to take it on.
As for the method, the inspiration is Simplicity-Oriented Design by the late Pieter Hintjens. One of the things I have learned to do in situations like these is to delay concrete decisions “till the last responsible moment”. Priorities might shift, and when they do, our time and effort investment should not feel like a loss but rather like learnings gained.