Sorry for coming in late (I only recently skimmed the thread :-). In any case, congratulations on your progress in defining the necessary components, approach, etc. Here are some random notions for your amusement and consideration…
More cameras
A few years ago I bought some small (~1.5" dia.), battery-powered, Wi-Fi connected cameras. IIRC, they only cost $50 each; they may be available even more cheaply now. It might be useful to mount something like these on the robot arm itself, focused on the gripping region.
For example, this “Wi-Fi body camera” seems promising, at about $30:
Alternatively, something like a borescope / endoscope camera could let you mount sensors on the gripper(s) themselves. For example, this one is about $20:
Discussion
Using multiple, simple cameras could let you explore various (e.g., non-mammalian) sensing and evaluation scenarios. Connectivity and/or power could be provided via Bluetooth, USB, Wi-Fi, etc.
Note: It may be necessary to jump through some hoops to find out how to interact with the camera(s) from your “base” computer (e.g., cell phone, laptop, tablet). I’d recommend doing some research and/or making some picky inquiries before buying anything.
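As one possibility to investigate: many inexpensive Wi-Fi cameras expose an MJPEG-over-HTTP stream (the exact URL and protocol vary by vendor, so treat that as an assumption to verify before buying). Since an MJPEG stream is basically concatenated JPEG images, a sketch of pulling frames out of the raw bytes might look like this:

```python
# Sketch: extract JPEG frames from an MJPEG byte stream.
# Assumes the camera serves MJPEG over HTTP (vendor-specific URL);
# this only parses raw bytes, so it can be tried offline.

SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def extract_frames(buffer: bytes) -> list[bytes]:
    """Return the complete JPEG frames found in the buffer."""
    frames = []
    pos = 0
    while True:
        start = buffer.find(SOI, pos)
        if start == -1:
            break
        end = buffer.find(EOI, start + 2)
        if end == -1:
            break  # last frame is incomplete; wait for more bytes
        frames.append(buffer[start:end + 2])
        pos = end + 2
    return frames
```

In practice you’d feed this from chunks read off the camera’s HTTP endpoint (e.g., via `urllib.request.urlopen`), buffering any trailing partial frame for the next read.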
Turntable
An inexpensive turntable (i.e., lazy susan) would give the setup another degree of freedom (letting Monty view objects from many more angles, under precise control). This would let you explore a variety of sensorimotor-based activities.
This approach is commonly found in 3D scanners, using a motorized turntable and surrounding cameras. In your use case, the robotic arm could spin the turntable itself, reposition objects on the platform, etc. If need be, cogs on the table’s periphery could make things easier for the gripper.
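If you do add cogs, planning a rotation reduces to simple arithmetic: pick the number of one-cog pushes that best approximates the desired angle. A tiny sketch (the cog count here is a made-up example; measure your actual hardware):

```python
# Sketch: convert a desired turntable rotation into discrete
# one-cog gripper pushes. Assumes evenly spaced cogs on the rim;
# n_cogs=24 is a hypothetical value, not a real spec.

def pushes_for_angle(angle_deg: float, n_cogs: int = 24) -> int:
    """Number of one-cog pushes that best approximates angle_deg."""
    deg_per_cog = 360.0 / n_cogs
    return round(angle_deg / deg_per_cog)
```

With 24 cogs, each push is 15 degrees, so a 90-degree view change is six pushes; the residual error is at most half a cog.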
Turtles (all the way down)
There are various ways to specify the actions and motions of robotic arms, etc. (The nice thing about standards is that there are so many to choose from. :-) I’d probably start with a textual (e.g., JSON-based) representation of a simple command set. Python’s turtle module might be an interesting starting point for an implementation.
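To make that concrete, here’s a minimal sketch of what a JSON command set with turtle-style semantics could look like. The command names and the dead-reckoning state are my own assumptions (the real turtle module needs a display, so this keeps the same idea as plain pose arithmetic):

```python
# Sketch: interpret a JSON list of turtle-style commands.
# Command vocabulary ("forward", "turn", "grip") is hypothetical,
# just to illustrate the textual-command-set idea.
import json
import math

class ArmTurtle:
    def __init__(self):
        self.x = self.y = 0.0
        self.heading = 0.0  # degrees; 0 points along +x

    def run(self, script: str) -> None:
        """Execute a JSON array of {"cmd": ..., ...} commands."""
        for step in json.loads(script):
            cmd = step["cmd"]
            if cmd == "forward":
                r = math.radians(self.heading)
                self.x += step["dist"] * math.cos(r)
                self.y += step["dist"] * math.sin(r)
            elif cmd == "turn":
                self.heading = (self.heading + step["deg"]) % 360
            elif cmd == "grip":
                pass  # placeholder: a real arm would close the gripper here
            else:
                raise ValueError(f"unknown command: {cmd}")

script = '''[
    {"cmd": "forward", "dist": 10},
    {"cmd": "turn", "deg": 90},
    {"cmd": "forward", "dist": 5}
]'''
arm = ArmTurtle()
arm.run(script)  # arm ends up near (10, 5), heading 90
```

A nice property of the textual form is that command scripts can be logged, replayed, generated by other programs, or shipped over any of the transports mentioned earlier (Bluetooth, USB, Wi-Fi).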
Use Case?
From time to time, I’ve speculated about crafting an intelligent robot for gardening. This might use a lightweight mobile platform (with motorized wheels), a robot arm and cameras, etc. It could be trained to detect, recognize, and report assorted plants. At some point, it might even be allowed to tend (e.g., weed) the garden.
As a small step in this direction, how about maintaining a few potted plants, letting the robot monitor their growth and emerging characteristics? Over the course of your project, you’d have time to observe and interact with an entire growth cycle.