Hi @brainwaves, and thank you for your reply and for sharing the related thread. I found the discussion on generalizing TBT models very insightful, especially the perspectives on extending it beyond traditional sensory modalities like vision or touch.
That said, I am still curious how well my idea aligns with Monty’s current capabilities in practice. In particular, I am wondering whether there is any precedent for, or known limitation around, using Monty in a virtual sensory space. My idea is to map features from a time-series dataset to two-dimensional spatial coordinates, such as sensor positions in a SCADA layout, and let Monty “navigate” this space to learn both temporal and spatial relationships. The dataset itself is historical rather than interactive, so the movement would be simulated rather than driven by a live environment.
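To make the idea concrete, here is a minimal sketch of what I have in mind (plain Python/NumPy, not Monty’s actual API; the sensor names, coordinates, and random readings are all placeholders): the historical dataset gets turned into a stream of (location, feature) observations, as if a sensor patch were stepping from one SCADA sensor position to the next at each timestep.

```python
import numpy as np

# Hypothetical 2D layout of SCADA sensors (names and coordinates are made up).
sensor_layout = {
    "pump_pressure": (0.0, 0.0),
    "tank_level":    (1.0, 0.0),
    "valve_flow":    (0.5, 1.0),
}
sensors = list(sensor_layout)

# Stand-in for historical readings: one row per timestep, one column per sensor.
rng = np.random.default_rng(0)
readings = rng.normal(size=(100, len(sensors)))

def observation_stream(readings, layout, sensors):
    """Yield (location, feature_value) pairs, visiting every sensor in turn
    at each timestep -- simulating movement through the virtual sensor space."""
    for t in range(readings.shape[0]):
        for i, name in enumerate(sensors):
            yield np.array(layout[name]), readings[t, i]

obs = list(observation_stream(readings, sensor_layout, sensors))
print(len(obs))  # prints 300 (100 timesteps x 3 sensors)
```

The open question for me is whether observations generated this way, where the “actions” are just a replayed visiting order rather than choices Monty makes, are compatible with how Monty expects to couple movement and sensation.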
The overall goal is to apply Monty’s sensorimotor framework to build internal representations of system behavior and to detect anomalies in an industrial process. In essence, I am exploring whether TBT can be used in a simulated context where the structure of the sensor space is known.
If anyone has attempted something similar, perhaps using non-visual data or simulating feature-wise exploration, I would really appreciate hearing about your experience. I would also be interested in any specific limitations that might make this approach a poor fit.
Best regards,
Martin