Monty and Graphs

If I say that @hlee is a TBP researcher, how does 3D Euclidean space inform that statement?

I think this goes back to the discussion about learning abstract concepts in Monty, right? Here, the concept of “Hojae is a TBP researcher” doesn’t really fit into a physical space such as 3D Euclidean, as you mentioned. Currently we focus on Monty operating in 3D Euclidean space to do things like object recognition. While harder to imagine, I think Monty can (eventually) learn in an abstract space, like a space of “relationships”. For the “Hojae is a TBP researcher” example:


I think the above can be a graph that a Learning Module stores in its memory, and Monty could do “inference” on it, like answering the question: “how many total dogs do TBP researchers have?”
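To make this concrete, here’s a toy sketch (not Monty code!) of such a relationship graph stored as (subject, relation, object) triples, with the dog-counting query answered by walking the triples. All names and counts below are made up for illustration:

```python
# Hypothetical relationship graph a Learning Module might store,
# as (subject, relation, object) triples. Names/counts are made up.
triples = [
    ("Hojae", "is_a", "TBP researcher"),
    ("Viviane", "is_a", "TBP researcher"),
    ("Hojae", "has_dogs", 1),
    ("Viviane", "has_dogs", 2),
]

def total_dogs_of(role):
    """Answer 'how many total dogs do <role>s have?' by walking the graph."""
    # Step 1: follow "is_a" edges to find everyone with the given role.
    people = {s for (s, r, o) in triples if r == "is_a" and o == role}
    # Step 2: sum the "has_dogs" edges of those people.
    return sum(o for (s, r, o) in triples if r == "has_dogs" and s in people)

print(total_dogs_of("TBP researcher"))  # 3 with these made-up numbers
```

The point is just that once relationships are explicit edges, this kind of multi-hop question becomes a graph traversal.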

FWIW, I’ve thought a bit about what “general” space Monty can operate in (though I didn’t pursue it very far at the time). The gist was: what properties might Monty need? Can it be a topological space with just a concept of “closeness”? Does it require a distance metric that satisfies positivity, symmetry, and the triangle inequality? Does it need angles? Does the space need to be flat everywhere, or can it be a curved space with local flatness, like a Riemannian manifold? (I don’t have answers to these :sweat_smile:)
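For concreteness, the metric axioms mentioned here can be checked numerically on a finite sample of points. This is just an illustration of the axioms, not anything Monty does; note that squared Euclidean distance fails the triangle inequality, so “distance-like” isn’t automatically “a metric”:

```python
import itertools
import math

def is_metric(d, points, tol=1e-9):
    """Check the three metric axioms on a finite sample of points:
    positivity (d(x, y) > 0 for x != y, and d(x, x) == 0),
    symmetry (d(x, y) == d(y, x)),
    triangle inequality (d(x, z) <= d(x, y) + d(y, z))."""
    for x, y in itertools.product(points, repeat=2):
        if x == y and abs(d(x, y)) > tol:
            return False
        if x != y and d(x, y) <= 0:
            return False
        if abs(d(x, y) - d(y, x)) > tol:
            return False
    for x, y, z in itertools.product(points, repeat=3):
        if d(x, z) > d(x, y) + d(y, z) + tol:
            return False
    return True

euclid = lambda p, q: math.dist(p, q)
squared = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
pts = [(0, 0), (1, 0), (0, 2), (3, 4)]
print(is_metric(euclid, pts))   # True
print(is_metric(squared, pts))  # False: violates the triangle inequality
```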

I think the basic problem is that, while the displacements (etc) are explicit, the edges are implicit. So, although the Numenta logo has a certain displacement from the “origin” of the Numenta cup, this isn’t really an “edge” in terms of graph theory, conceptual graphs, etc.

Yeah… though I don’t think this means we can never add edges for physical objects? For example, we could add an edge with attributes like a timestep or a movement action (e.g. “moved up at speed X or with force Y”). I could imagine storing some speed information and letting Monty do a random walk to infer the size of an object. Say I want to distinguish a big cup from a small cup that is identical in features and morphology: if I happen to return to the same spot in fewer timesteps while randomly walking on one object than on the other, I might conclude that it’s the smaller object. Or maybe there could be a self-loop edge if I tried to move in a direction with force Y but ended up in the same location; this could tell me something about the material of the object (trying to “push” it with Y newtons of force didn’t deform it). I’m completely spitballing here, though…
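In the same spitballing spirit, here’s a toy simulation (again, not Monty code) of the random-walk idea, where each “object” surface is just a ring of locations. The smaller ring has fewer locations, so the walk revisits its starting point sooner on average:

```python
import random

def steps_to_return(n_locations, rng, max_steps=10_000):
    """Random walk on a ring of n_locations surface points; count the
    number of steps until the sensor revisits its starting location."""
    pos = 0
    for step in range(1, max_steps + 1):
        pos = (pos + rng.choice([-1, 1])) % n_locations
        if pos == 0:
            return step
    return max_steps  # gave up (rare for small rings)

rng = random.Random(42)
trials = 500
small = sum(steps_to_return(8, rng) for _ in range(trials)) / trials
big = sum(steps_to_return(32, rng) for _ in range(trials)) / trials
print(small < big)  # on average, the smaller "object" is revisited sooner
```

Of course a real object surface isn’t a ring, but the general principle (mean return time grows with the number of reachable locations) is what would let timing information hint at object size.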

What I’d like to have is a way to tie Monty’s internal models to meanings that make sense to humans, LLMs, etc. I think that this might be accomplished by adding some tags and such, but this is venturing far outside of neurological approaches.

I’m curious about your thoughts on the tagging approach. How would you add tags? Does it need to be exhaustive? Storing additional information as tags for objects (let’s say Monty learned fork, cup, and bowl), such as “found in kitchen”, is nice, but what about “can be used for pouring water”? That attribute/tag would be very useful when Monty is trying to solve a larger goal like “put out the fire”, but then there could be infinitely many attributes per object, and they’d be highly context-dependent, right?
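Just to pin down what I mean by the tagging idea: something like open-ended tags stored alongside learned object models, retrieved by whatever affordance a goal needs. All tags and goals here are hypothetical, and the sketch also shows the problem: nothing here could enumerate every affordance up front.

```python
# Hypothetical tags attached to learned object models.
# A real system couldn't list every context-dependent affordance up front.
object_tags = {
    "cup":  {"found_in_kitchen", "can_pour_water"},
    "bowl": {"found_in_kitchen", "can_pour_water"},
    "fork": {"found_in_kitchen"},
}

def objects_for_goal(required_tag):
    """Return learned objects whose tags satisfy the goal's requirement."""
    return sorted(name for name, tags in object_tags.items()
                  if required_tag in tags)

print(objects_for_goal("can_pour_water"))  # ['bowl', 'cup']
```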

(Also, this feels somewhat related to object representation, and there was some discussion about that here: 2024/08 - Encoding Object Similarity in SDRs).

In any case, the Good News is that I now understand what’s going on. The Bad News, IMHO, is that this (mis)use of the word “graph” is needlessly confusing. Defining a new term (e.g., Learning Module) is fine, using an existing term in a new, unexplained, and confusing manner is not.

FWIW, I think terms like “object model” and “point cloud” have far less chance of confusing folks, down the road…

If you are up for it, I think this would make an excellent RFC with a great motivation section! :smiley:
