These are two sides of the same coin: better robotics enabled by generalization to novel scenarios. When I say general intelligence, I’m talking about ARC-AGI’s definition, which Hawkins agrees with:
AGI is a system that can efficiently acquire new skills outside of its training data.
Granted, I was a bit ambiguous between physical capabilities and cognitive capabilities in my previous reply. I was more focusing on the latter than the former, e.g. running a Monty-powered conversational model on a local device.
Yes, I agree it will require some sort of shepherding as it gets more advanced. I don’t know how much of that is part of Hawkins’ vision and objectives, but ideally, there will have to be a working group to help steer the technology’s alignment.
This working group could perhaps even distribute a range of pre-trained models, in a similar fashion to DeepSeek and Qwen. I know there will be people who will pry guardrails off, just like abliterated LLMs, but that’s the unavoidable tradeoff of open source.
I still think open source is a better approach to AI than Big Tech and their “trust me bro” attitude. Legislation wouldn’t help with AI alignment either, because lawmakers don’t have that much leverage over what tech companies do behind closed doors.
You can’t prevent bad actors from exploiting your code, but with Hanlon’s razor in mind, how would you “steer” usage of an open-source project? Well, I have some relevant experience to share…
In 2013, I co-founded the A3Wasteland project, a video game mod for Arma 3. At that time, a lot of Arma modders were starting to close their source code by turning their mods into SaaS-like platforms, because they disliked that people made derivative works of their creations. (The game’s file format made it very easy to extract anyone’s source code.)
I thought that was really dumb, so I went full open source with AGPLv3, the strongest copyleft license there is, requiring providers to publish their code, server stuff and all. I even deliberately intertwined client and server code in a way that made it very difficult to SaaS-ify.
At its peak, there were over a hundred communities running my mod on their servers, with thousands of players. I had no direct control over these communities; I just told everyone to fork my code and customize it however they wanted.
As time progressed, communities altered my code more and more, sometimes in ways that became detrimental to the broader playerbase. The situation was getting a bit ridiculous for the players, who were my number 1 priority at all times.
I had to do something, but had no control over how communities used my code… Most open-source projects rally around a main repo, but in my case, the main repo was just the seed, with a hundred forks as its roots.
So I tried a roundabout way to nudge things in the right direction. I released content updates semi-regularly, which communities often adopted and deployed quickly, since they competed with each other to have the latest and best features. I started using these content updates to introduce balance changes and try to undo some of the damage.
I was in active discussion with prominent communities about why I was introducing those changes and how they would benefit everyone. This approach was relatively well received: most communities accepted the changes, and it resolved some of the gameplay issues. A minority didn’t like it and refused, but that was inevitable.
It proved to me that there is a viable middle ground where an open-source project can guide a decentralized ecosystem of end users in a certain direction, without giving in to authoritarianism’s siren song.
I foresee the project evolving into something similar, with tons of variants around the world. Not necessarily variants of the codebase, but rather of the models. Can this sort of steering approach work at a global scale? I don’t know. All I know is that it’s the only principled one in my book. It won’t stop all bad actors, but it will certainly foster good ones.
It’s a little early, but we’ll figure this out.