How could Monty speak? (Neurosymbolic Syntax)

Hi @Judah_Meek , just to add to the information that Will has shared, I would highlight again the existing version of unsupervised learning in Monty that Will linked. During unsupervised learning, if Monty perceives an object that it believes is similar to something it already knows, it will combine this information, forming a model that integrates both representations. If it determines that the object it is learning about is very different from anything it knows, it will learn an entirely new model.
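To make that merge-or-create decision concrete, here is a minimal sketch of the idea. The function names, the cosine-similarity measure, the averaging merge, and the threshold are all illustrative assumptions for this post, not Monty's actual API or matching procedure:

```python
import numpy as np

# Assumed cutoff for "similar enough to merge" -- purely illustrative.
SIMILARITY_THRESHOLD = 0.8

def similarity(model_features: np.ndarray, observed: np.ndarray) -> float:
    """Cosine similarity between a stored model's features and an observation."""
    return float(
        np.dot(model_features, observed)
        / (np.linalg.norm(model_features) * np.linalg.norm(observed))
    )

def learn(models: list[np.ndarray], observation: np.ndarray) -> list[np.ndarray]:
    """Merge the observation into the most similar model, or add a new one."""
    if models:
        scores = [similarity(m, observation) for m in models]
        best = int(np.argmax(scores))
        if scores[best] >= SIMILARITY_THRESHOLD:
            # Similar enough: combine the representations
            # (here a simple running average stands in for model integration).
            models[best] = (models[best] + observation) / 2.0
            return models
    # Nothing similar enough: learn an entirely new model.
    models.append(observation)
    return models
```

In this toy version, each "model" is just a feature vector, whereas Monty's object models are structured (pose-aware) representations; the sketch only conveys the clustering-like branch between integrating and creating.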

Right now, unsupervised learning is very basic in Monty, as we have been prioritizing compositionality and object behaviors. However, there is already a new type of learning module (LM), constrained object models, which should make the above unsupervised learning work even better as a way of "clustering" similar objects in the world. You might therefore be interested in checking the Future Work item here, which basically consists of testing this already implemented method.

As a final comment, our general view is that representations and concepts emerge early in life prior to any form of language. Language later becomes a way of referencing these representations, but they are fundamentally learned in a sensorimotor way. For more on this, you might find this thread interesting.
