I have several devices which I am anxious to interface to Monty. To do so requires Monty to look outward, not just to the Habitat simulator.
I’ve examined sensors.py & actions.py & actuator.py & states.py & …, and have determined that the changes required to make Monty interact with the real world are too extensive, and too structural, for this 80-year-old brain to have fun doing. In any event, the changes I would have to make would be unlikely to match the intentions of the Monty developers, and so be accepted and serve the community.
So I will await the arrival of the robot hackathon (or anyone’s developments in this area), which will certainly require this ability. Also useful in this regard would be a Distributed CMP.
In the meantime, the pictured menagerie (cute little red, bug-eyed, binocular webcams on macOS to the left, a Raspberry Pi self-driving vehicle with a lidar on its roof, and an Intel RealSense RGB depth camera on macOS on the right, all viewing my ugly mug (coffee, that is)) offers the following horrible puns:
Welcome to the sensorium
Eager to share our sense-abilities
We want to get our (coffee) mug shots added to the fray
Let’s get this circus flying (given the project’s name & programming language)
Great puns! Looks like some fun projects awaiting Monty!
Sorry it is still a bit involved to get Monty to run in new applications. It will hopefully be more intuitive to understand once we publish more materials and examples from the (team-internal) robot hackathon.
In the meantime, just a few thoughts/hints:
The intended interface between Monty and any hardware would be the EnvironmentDataLoader and EnvironmentDataset classes. Monty is designed with the idea that any of the general components (defined as abstract classes in the code) can be customized, so you can easily plug and play with the different components (as described in this video and in the docs). A rough sketch of what such a custom environment might look like follows after these hints.
We already have several custom environments implemented that do not use Habitat. You can see a list of them at the bottom of the documentation page on Environment and agent.
The vast majority of our experiments currently happen in simulation, but we did test Monty on real-world data during a previous hackathon. There, we streamed data from an iPad camera to the laptop. Then, Monty could virtually move a small sensor patch over the full image. See the demo video here: Project Showcase. Code is also available, and there is a set of MontyMeetsWorld benchmark experiments that we run frequently.
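To make that patch-over-an-image idea more concrete, here is a rough, illustrative sketch. The class name, method names, and observation format below are my own placeholders, not the actual tbp.monty API; the real integration point remains the EnvironmentDataset / EnvironmentDataLoader classes mentioned above.

```python
# Illustrative sketch only -- the class name, method names, and observation
# format are assumptions, not the actual tbp.monty API.
import numpy as np


class StreamedImagePatchEnvironment:
    """Wraps a single camera frame so a small sensor patch can be
    'moved' over it, roughly like the MontyMeetsWorld showcase."""

    def __init__(self, frame: np.ndarray, patch_size: int = 64):
        self.frame = frame              # H x W x C image streamed from the device
        self.patch_size = patch_size
        self.loc = np.array([frame.shape[0] // 2, frame.shape[1] // 2])

    def step(self, action: np.ndarray) -> np.ndarray:
        """Move the virtual patch by (dy, dx) pixels and return the new observation."""
        half = self.patch_size // 2
        h, w = self.frame.shape[:2]
        # Clamp so the patch stays inside the image.
        self.loc = np.clip(self.loc + action, half, [h - half, w - half])
        y, x = self.loc
        return self.frame[y - half:y + half, x - half:x + half]


# Usage: pretend a 480x640 RGB frame just arrived from the webcam stream.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
env = StreamedImagePatchEnvironment(frame)
patch = env.step(np.array([10, -5]))    # move the patch down 10 px, left 5 px
print(patch.shape)                      # (64, 64, 3)
```

In a real setup, the frame would be refreshed continuously from the camera stream, and the action would come from Monty’s motor system rather than being hard-coded.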
Generally, for a robotics project, I would currently recommend an approach similar to the one we took for this little showcase project (a rough robot-side sketch follows the steps below):
Stream the data from your sensors to your computer, where Monty runs.
Feed the incoming sensor data to Monty. It will first go into sensor modules, which convert the raw data into CMP-compliant messages; then into learning modules, which model the data; and finally, the motor system can translate CMP-compliant goal states into motor commands specific to your robot’s actuators.
Stream the motor commands back to your robot.
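As a rough sketch of what steps 1 and 3 could look like on the robot side: the host address, port, JSON line format, and the read_sensors() / apply_motor_command() helpers below are all hypothetical placeholders. In practice you might use ZeroMQ, ROS topics, or whatever transport your hardware already supports.

```python
# Sketch of the robot-side loop for steps 1 and 3: push sensor readings to
# the machine running Monty and apply whatever motor commands come back.
import json
import socket

HOST, PORT = "192.168.1.10", 5000   # assumed address of the computer running Monty


def read_sensors() -> dict:
    # Placeholder: poll your camera / lidar / IMU here.
    return {"rgb": [], "depth": [], "pose": [0.0, 0.0, 0.0]}


def apply_motor_command(cmd: dict) -> None:
    # Placeholder: translate the command into actuator signals on the robot.
    print("applying", cmd)


with socket.create_connection((HOST, PORT)) as sock:
    stream = sock.makefile("rw")
    while True:
        # Step 1: stream one sensor reading per line as JSON.
        stream.write(json.dumps(read_sensors()) + "\n")
        stream.flush()
        # Step 3: block until the Monty side sends a motor command back.
        line = stream.readline()
        if not line:
            break                   # connection closed by the Monty side
        apply_motor_command(json.loads(line))
```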
Depending on the application, you may have to implement a custom sensor module and/or motor system in addition to the custom EnvironmentDataLoader. We hope to publish more concrete instructions and guidelines on what applications are possible and which ones are currently not. I hope this helps a little already. Hang in there, we are doing what we can to lower the barriers!
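And, purely as an illustration of what “raw data in, CMP-style output” from a custom sensor module means, here is a toy, standalone message type that pairs features with a pose. It is not Monty’s real message class (see states.py in the repository for how Monty actually represents this); the fields and the patch_to_message() helper are just placeholders.

```python
# Hypothetical, standalone illustration of a feature-at-pose message.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class FeatureAtPoseMessage:
    location: np.ndarray                          # 3D location of the sensed patch
    orientation: np.ndarray                       # e.g. a surface normal / pose vector
    features: dict = field(default_factory=dict)  # non-pose features (color, curvature, ...)
    confidence: float = 1.0


def patch_to_message(rgb_patch: np.ndarray, depth_patch: np.ndarray,
                     sensor_location: np.ndarray) -> FeatureAtPoseMessage:
    """Toy conversion: summarize a small RGB-D patch as one feature-at-pose message."""
    center_depth = float(depth_patch[depth_patch.shape[0] // 2, depth_patch.shape[1] // 2])
    return FeatureAtPoseMessage(
        location=sensor_location + np.array([0.0, 0.0, center_depth]),
        orientation=np.array([0.0, 0.0, -1.0]),   # placeholder normal, facing the camera
        features={"mean_rgb": rgb_patch.reshape(-1, 3).mean(axis=0).tolist()},
    )
```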
This morning I entered a fairly dark room and noticed the round shape of a coffee mug on the desk. When I went to pick it up, it was in fact a curved scrap of paper. A slight Bayesian adjustment got wired into my neurons.
I’ve been doing slight adjustments for many decades, and obviously the job isn’t yet complete. We may be a bit impatient with this type of A.I. learning. Also, there are probably second- and higher-order cybernetics happening in both natural and artificial systems, which is why I find error analysis so compelling.
I am just an observer, but I have a thought that may be useful. I suspect the first real-world application of this technology will be in large existing machines. Can your software be integrated with the computer of a self-driving auto? They have a lot of sensory input, decision making, and actions.
I am just an observer, but I have a thought that may be useful.
Welcome to the club. FWIW, I’ve been a fly on the wall of AI for several decades, starting with a visit to the Stanford AI Lab in 1970. (And I also post possibly useful thoughts.)
Although I’m willing to be watched over by machines of Loving Grace, I’m not ready to cede control of my vehicle’s actions to an early (e.g., pre-1.5) code release. That said…
Even “standard” modern vehicles have lots of sensors. Most of these are available via the CAN bus. However, the details may be undocumented and/or proprietary, so YMMV.
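For a concrete starting point, here is a minimal sketch using the python-can library, assuming a Linux socketcan interface named can0 (e.g., a Raspberry Pi with a CAN HAT wired into the vehicle). The interface name and polling loop are assumptions; decoding raw frames into named signals additionally needs the vehicle’s DBC database, which is often where the proprietary part bites.

```python
# Minimal sketch of reading raw frames from a vehicle CAN bus with python-can.
# Assumes a Linux socketcan interface named "can0" (e.g., a RasPi CAN HAT).
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

try:
    while True:
        msg = bus.recv(timeout=1.0)     # returns None on timeout
        if msg is None:
            continue
        # Raw frames only: mapping arbitration IDs to named signals requires
        # the vehicle's DBC file, which may be undocumented or proprietary.
        print(f"id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")
finally:
    bus.shutdown()
```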
There are also many other sensors that a self-driving vehicle might have. Some of these (e.g., vision, lidar) are high bandwidth and require Ethernet, etc. Also, the same issues of documentation and such apply.
Finally, any interested hobbyist can add their own sensors. A RasPi can gather quite a lot of its own data without even breaking a sweat. Using Bluetooth, Ethernet, USB, and/or Wi-Fi, it could handle (monitor and control) a wide variety of devices.
The issue of control is important because the TBT is largely focused on sensorimotor learning; passive observation, not so much…
Hi @shedlesky and welcome to our forum!
Self-driving cars will be a prime application for Monty in the future since, as you mentioned, they are a sensorimotor application with multiple sensor modalities that need to be integrated to perform informed actions, which is exactly what Monty is made for. However, I have to agree with @Rich_Morin that Monty is currently at too early a stage of research for this. There are several core capabilities on our roadmap that we still need to implement (modeling compositional objects, object behaviors, and a dynamic world) before we can look into such a complex application. But once those are in place, it will be exciting to test our approach on this application!
-Viviane