Hi Ely,
I’m happy to hear that you went through the tutorials and now want to test with some custom data! We want to write some more detailed instructions for this, including a new tutorial and a monty-for-robotics-starter-kit, but until I get around to this, here are some notes:
- I linked a few resources on customizing the EnvironmentDataLoader and EnvironmentDataSet in this post: I'd hoped to be able to end the non-sense - #2 by vclay
- You don’t necessarily need to use an existing PyTorch dataset (in fact, the terminology is a bit confusing; in general you should not use a static dataset at all, since this is a sensorimotor learning approach). Instead, you will need to customize the EnvironmentDataLoader and/or EnvironmentDataSet (which are adapted from their PyTorch counterparts) to specify how your data should be loaded and how actions determine the next observation.
- For an example, I would recommend having a look at this file: tbp.monty/src/tbp/monty/frameworks/environments/two_d_data.py at main · thousandbrainsproject/tbp.monty · GitHub. Here we define a custom OmniglotEnvironment, which takes the Omniglot dataset, lets a small patch move over the handwritten symbols following the strokes, and returns those observations to Monty. There is also a custom environment, SaccadeOnImageEnvironment, where we take an RGBD image and move a small patch over that image (there is a rough sketch of this pattern at the end of this message). The corresponding DataLoaders can be found in this script: tbp.monty/src/tbp/monty/frameworks/environments/embodied_data.py at main · thousandbrainsproject/tbp.monty · GitHub
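To make the pattern a bit more concrete, here is a minimal, self-contained sketch (plain NumPy, no tbp.monty imports) of the "move a small patch over an RGBD image and return each patch as an observation" idea. The class and method names below are only illustrative and are not the actual Monty interface; in practice you would subclass the environment/dataset/dataloader classes linked above and follow their real signatures.

```python
# Illustrative sketch only -- NOT the real tbp.monty classes.
# It just shows the core idea: actions move a sensor patch, and each
# step returns the observation at the new location.
import numpy as np


class PatchOnImageEnvironmentSketch:
    """Holds one RGBD image and returns a small patch at the current location."""

    def __init__(self, rgbd_image: np.ndarray, patch_size: int = 16):
        self.image = rgbd_image          # (H, W, 4) array: RGB + depth
        self.patch_size = patch_size
        self.loc = np.array([patch_size, patch_size])  # patch center (row, col)

    def reset(self):
        """Start a new episode at a fixed location and return the first observation."""
        self.loc = np.array([self.patch_size, self.patch_size])
        return self._extract_patch()

    def step(self, action: np.ndarray):
        """Apply a 2D movement (in pixels) and return the resulting observation."""
        h, w = self.image.shape[:2]
        half = self.patch_size // 2
        # Clip so the patch always stays inside the image.
        self.loc = np.clip(self.loc + action, half, [h - half - 1, w - half - 1])
        return self._extract_patch()

    def _extract_patch(self):
        half = self.patch_size // 2
        r, c = self.loc
        patch = self.image[r - half : r + half, c - half : c + half]
        # Return the observation together with the location it was sensed at.
        return {"patch": patch, "location": self.loc.copy()}


if __name__ == "__main__":
    fake_rgbd = np.random.rand(128, 128, 4).astype(np.float32)
    env = PatchOnImageEnvironmentSketch(fake_rgbd, patch_size=16)
    obs = env.reset()
    for move in ([8, 0], [0, 8], [-4, 4]):  # a tiny scripted "saccade" policy
        obs = env.step(np.array(move))
        print(obs["location"], obs["patch"].shape)
```

The DataLoader side then mostly amounts to deciding which movements to send to `step()` and handing the returned observations to Monty, which is what the classes in embodied_data.py take care of.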
I hope this helps! Let me know if you have more questions.