Hi There,
My name is Ram - I’m doing my PhD on neuromorphic computing and AI at the University of Groningen in the Netherlands.
I’ve experimented with using Monty for learning from tactile data. For example there is a public Braille letter tactile dataset published online here.
I’m working with tactile pressure readings collected in our lab from a tactile sensor on fabrics. I’ve been using this data for a classification task with several neuromorphic paradigms; it worked well with SNNTorch but not with others (Nengo and reservoir computing).
I tried using Monty, but after setting up the environment to read the tactile data and specifying some positioning for the finger based on the data, it’s still unclear to me how learning would actually work for a tactile rather than a vision task.
In my case I have a single tactile pressure sensor producing voltage readings for 19 classes of data, each with roughly 20 samples. I can build graphs for each of the samples and do a supervised training run, after which I evaluate on test data. But is the learning algorithm basically matching voltage + position graphs against other graphs?
I’m not entirely sure that will work well, or whether that is the only available learning mechanism, as pressure readings can be very similar to one another. I’m also not clear what the benefit of Monty is in this case, as you could just directly build graphs of voltage + positional information without going through the Monty framework and do a clustering analysis. It seems like machine learning without nonlinearity, which I don’t expect to work for a complex 200,000 Hz tactile classification task (which is what I’m dealing with).
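To make that concrete, here is roughly the kind of baseline I have in mind (just a sketch with simulated data; the feature summary and the scikit-learn nearest-neighbour classifier are placeholders for whatever clustering/matching step one would actually use):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: one (position_xyz, voltage) point cloud per sample.
# 19 fabric classes x ~20 samples each; here I just simulate the shapes.
rng = np.random.default_rng(0)
n_classes, n_samples_per_class, n_points = 19, 20, 50

def summarize(points):
    """Flatten an (n_points, 4) cloud of [x, y, z, voltage] rows into one
    fixed-length feature vector (mean/std per column) for matching."""
    return np.concatenate([points.mean(axis=0), points.std(axis=0)])

X, y = [], []
for label in range(n_classes):
    for _ in range(n_samples_per_class):
        cloud = rng.normal(loc=label, scale=1.0, size=(n_points, 4))
        X.append(summarize(cloud))
        y.append(label)
X, y = np.array(X), np.array(y)

# Nearest-neighbour "graph matching" baseline: does a new voltage+position
# summary sit closest to summaries from the same fabric class?
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.score(X, y))
```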
It would be great to get some feedback on whether my idea of how to use learning modules for this lines up with what is expected, and whether there are any alternative suggestions!
Best,
Ram
Hi Ram, welcome to the forums, and thanks for working on this project with the TBP; it sounds super interesting!
From your general description, this could be a great test-bed for Monty, so it should in principle work. To start things off, it would be helpful to clarify a few points:
- How do you derive the movement / location information in the dataset, i.e. where the tactile sensor was for each pressure reading?
- You mentioned building graphs; do you mean the models that are built in Monty? Have you been able to visualize them, and if so, how do they look for some of the Braille letters? That would be interesting to see.
- It would also be interesting to better understand what the pressure data is like, e.g. is it something like a grid of depth values (a 2D array), or is it more like a one-dimensional input? Learning modules should operate exactly the same for tactile data as they would for vision data; the key difference is the processing that is done in a Sensor Module class. That class should turn the raw input into a “Cortical Messaging Protocol” output, which contains where the feature is being sensed as well as a sensed “pose” of the feature, e.g. the direction of a surface normal and an edge. That’s a lot of terms I just threw out, but hopefully the documentation on our website makes them clear; otherwise I’m happy to elaborate. A rough sketch of what such a message might contain is included just after this list.
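To make the Cortical Messaging Protocol idea a bit more concrete, here is an illustrative sketch of what a single message from a tactile sensor module could contain. This is plain Python rather than the actual Monty classes, and the field names and gradient-based pose estimate are just for illustration, assuming the reading is a small 2D patch of pressure values:

```python
import numpy as np

def tactile_patch_to_cmp_message(location_xyz, pressure_patch):
    """Illustrative only: turn one tactile reading into a CMP-style message.

    location_xyz   -- where the sensor was when the reading was taken (3-vector)
    pressure_patch -- small 2D array of pressure/voltage values around that point
    """
    # Estimate a "pose" for the sensed feature from the local pressure gradient:
    # the surface normal and the direction of steepest change (an edge-like cue).
    gy, gx = np.gradient(pressure_patch.astype(float))
    edge_dir = np.array([gx.mean(), gy.mean(), 0.0])
    edge_dir = edge_dir / (np.linalg.norm(edge_dir) + 1e-9)
    surface_normal = np.array([0.0, 0.0, 1.0])  # placeholder: flat surface assumed

    return {
        "location": np.asarray(location_xyz),        # where the feature is sensed
        "pose_vectors": [surface_normal, edge_dir],  # sensed pose of the feature
        "features": {"pressure": float(pressure_patch.mean())},  # non-pose features
        "confidence": 1.0,
    }

# Example: a 5x5 patch of voltage readings at one finger position.
msg = tactile_patch_to_cmp_message([0.01, 0.02, 0.0], np.random.rand(5, 5))
print(msg["location"], msg["pose_vectors"][1])
```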
To address some of your questions/comments:
- I would agree that 200,000 Hz sampling is much more than Monty needs. Is it possible to sub-sample snapshots from this data? Something like 5 Hz would probably be more than enough (see the short sketch after this list).
- Re. Monty, it’s important to note that it will detect the global arrangement of the different pressure readings, not just the local pressure readings; in other words, it is not a bag-of-features model. Maybe there is a simpler approach for your use-case (like the graph clustering you mentioned), but to give some examples of how Monty could be helpful:
  i) Monty can perform inference after sampling only a handful of points in the input. In other words, it could potentially recognize a Braille letter before the entire thing has been felt, or if part of it has not been felt due to noise (e.g. poor contact with the sensor).
  ii) Monty can direct where to move so as to efficiently recognize the letter; rather than moving randomly over the object, or needing to do a raster scan over the entire letter, Monty could move in a principled way to parts of a letter that quickly disambiguate it from other, similar letters.
  iii) Long-term (not currently possible), a hierarchical Monty system that has knowledge of words could target how it samples individual letters, based on expectations about what word is being read, etc.
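On the sub-sampling point above, something as simple as the following would probably do. This is only a sketch, assuming the raw recording is a 1-D voltage array at a known sampling rate:

```python
import numpy as np

def subsample(voltage, raw_rate_hz=200_000, target_rate_hz=5):
    """Average the raw voltage stream over non-overlapping windows so that
    one snapshot is emitted at roughly target_rate_hz."""
    window = raw_rate_hz // target_rate_hz          # raw samples per snapshot
    n_windows = len(voltage) // window
    trimmed = np.asarray(voltage[: n_windows * window], dtype=float)
    return trimmed.reshape(n_windows, window).mean(axis=1)

# Example: 2 seconds of raw data at 200 kHz -> ~10 snapshots at 5 Hz.
raw = np.random.rand(2 * 200_000)
print(subsample(raw).shape)  # (10,)
```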
Hope that helps.