I just watched an interesting and thought-provoking video on the state of the art in AI-based assistive technology. In it, a blind British comedian (Chris McCausland) takes a tour of some US tech centers in the SF Bay Area and Boston (e.g., Meta, Waymo, and an MIT lab on nanotech). He tries out various devices and talks about his reactions.
The show doesn’t get into nitty-gritty details, but it does give a high-level idea of current and possible developments. If you want to go off and watch it, feel free. (I’ll wait here…)
Chris McCausland: Seeing into the Future
Exploring Tech That Transforms Lives
Ah, you’re back. Let’s talk about a Partly-Baked Idea (PBI) regarding Monty and assistive tech. Basically, the notion is that a Monty instance could “ride along” with assistive tech, including sensing glasses, autonomous cars, etc.
- Sensing glasses: Meta’s Ray-Ban Display glasses contain a variety of tech, including AI, cameras, microphones, speakers, and video displays.
- Autonomous cars: Waymo’s autonomous taxis (currently deployed in SF) contain even more tech, including a few dozen cameras and several LiDAR units.
It seems quite plausible that all or part of the collected data could be copied to a Monty instance, either in real time or after the fact. The question is: could Monty make use of it, whether for training or for practical use?
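To make the idea a bit more tangible, here is a minimal sketch of what “replaying recorded sensor data into a Monty instance” might look like. Everything here is hypothetical: the `Frame` and `PassiveLearner` names are stand-ins I invented for illustration, not part of Monty’s actual interfaces.

```python
# Hypothetical sketch: feeding a recorded sensor log to a Monty-style
# learner, either live or after the fact. Names are illustrative only.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Frame:
    timestamp: float   # seconds since start of the recording
    sensor_id: str     # e.g., "camera_3" or "lidar_front"
    data: bytes        # raw sensor payload

class PassiveLearner:
    """Consumes frames it did not request -- observation, not action."""
    def __init__(self) -> None:
        self.seen: List[str] = []

    def observe(self, frame: Frame) -> None:
        # A real system would extract features and update object models;
        # here we just note which sensor each frame came from.
        self.seen.append(frame.sensor_id)

def replay(log: Iterable[Frame], learner: PassiveLearner) -> int:
    """Feed a recorded log to the learner in timestamp order."""
    count = 0
    for frame in sorted(log, key=lambda f: f.timestamp):
        learner.observe(frame)
        count += 1
    return count
```

The point of the sketch is just that “after the fact” replay is cheap to build: the same `observe()` path could be driven by a live feed or by a stored log.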
My take is that it could, and should. Although the TBT is ostensibly based on sensorimotor feedback loops, where the brain tells the body what to examine next, the actual situation is a bit more nuanced. Consider:
- While reading a web page or watching a video, you are ingesting and navigating immutable content. (You can decide which parts of the content deserve your attention, but the content itself is frozen in time.)
- As you move your eyes around the screen, and perhaps move to a different section of the material, you are actively working within a feedback loop. However, this is only true for some value of “you”: large parts of your brain are simply going along for the ride, waiting for some recognizable objects to appear. This is a bit like being a passenger in a vehicle.
- Similarly, when you are an audience member at a live demonstration, your primary control has to do with what you pay attention to. A lot of animals (e.g., birds, mammals, octopuses) can learn by observation; Monty should also be able to do this…
Getting a bit more concrete (or asphalt :-), I wonder whether the Waymo folks would be willing to share some recorded data or even give Monty instances live feeds from some vehicles. The Meta folks might also be interested in pairing their glasses with TBT tech.
(ducks)