Monty and Graphs

Note: Thanks to @brainwaves for pointing out (in a PM) that the connection between @vclay’s remarks and this response was unclear. I’ve attempted to fix this below…

@vclay says “As I view it, thinking is also a type of movement, but it only happens in mental space.”

I agree with this, but I’d like to get a bit more specific. Although it may not seem intuitively obvious, I think that graph navigation (i.e., movement along edges) and processing (e.g., creation, analysis) could be key to the processing of abstract concepts in Monty.

Although the comments below wander about a bit, they are intended to show how directed graphs could be used to promote Monty’s “movement through mental space”.

Background

As some folks here may recall, I’m a big fan of directed graphs as data structures, primarily because they are so incredibly versatile. I also like databases and LLM interfaces which can use graphs.

BTW, anyone who is considering storing and using data values which change over time (e.g., in Monty) really needs to watch Rich Hickey’s Datomic talk. As the author of Clojure, Datomic, and a number of seminal presentations, he is (IMHO) several of the smartest people in the computing field.

Discussion

The current notion of Monty’s data graph seems rather ill-defined to me. Basically, each Learning Module (LM) is charged with retaining some state, based on received (e.g., CMP) messages. However, some niggling questions arise, e.g.:

  • What state should each LM retain?
  • How long should the state be retained?
  • What should happen to the “updated” state?
  • How can we assess the state’s “meaning”?
  • How can we access and/or process LM states?

Although I don’t have answers to these questions, I do have some ideas I’d like to promote. Framed as a wish list, we should be able to:

  • get snapshots of an LM’s retained states
  • save time-stamped snapshots in a data store
  • add tags (etc) to imbue the data with meaning
  • seed LMs with “instinctual” data and tags
  • seed Monty with desired modules and connections
  • use data and tags to understand Monty’s behavior
  • and a pony…

Here is one possible path to some of this; comments welcome…

  • Enable each LM to report on its state (e.g., via GraphQL).
  • Set up a multi-model database to store historical snapshots.
  • Set up an immutable database to retain time-ordered values.
  • Use LLMs (etc) to analyze both DBs and generate “seeds”.
  • Rinse, repeat…
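
To make the first two bullets concrete, here’s a minimal sketch (Python) of an append-only, time-stamped snapshot log. The LM interface (report_state(), the tag vocabulary, the “mlh” payload) is pure invention on my part, not Monty’s actual API:

import json, time

class SnapshotLog:
    """Append-only, time-ordered store of LM state snapshots."""
    def __init__(self):
        self._log = []                  # immutable history: we only ever append

    def record(self, lm_id, state, tags=()):
        self._log.append({
            "t":     time.time(),       # timestamp of the snapshot
            "lm":    lm_id,             # which LM reported
            "state": state,             # whatever report_state() returned
            "tags":  list(tags),        # semantic tags, to imbue meaning later
        })

    def history(self, lm_id):
        return [e for e in self._log if e["lm"] == lm_id]

log = SnapshotLog()
log.record("lm_0", {"mlh": "numenta_cup"}, tags=["kitchen"])
print(json.dumps(log.history("lm_0"), indent=2))

Because records are never mutated, replaying or diffing snapshots over time (the Datomic trick) falls out for free.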

Incidentally, Monty’s message-based structure should play nicely with an immutable, time-ordered value store. After all, each message is just a transaction…

(ducks)

P.S. Datomic is closed source, but it has a free (as in beer) license. And, by using Datomex, it could be accessed from Elixir. FWIW, there are also some open source alternatives and a full Elixir version might be possible. Here’s a ChatGPT link for some details…

Hi @Rich_Morin
thanks for clarifying your post and the connection to the previous messages. However, I am still having a hard time connecting what you are saying to how Monty works. Monty does not use directed graphs to represent objects. It represents objects as features at locations (think more of a point cloud). Or do you mean the connectivity between learning modules?
Maybe have a look at our white paper. In Figure 19, we describe the inputs and outputs to a learning module. Also, Figure 5 might be useful in illustrating the kind of information that is passed between learning modules. Any message sent or received by an LM is compliant with the CMP. This basically means it contains a pose in a common reference frame + features at that pose. A learning module is NOT able to communicate its internal models to other learning modules. It can only communicate summarized information of its internal state, such as the ID of the object it is currently sensing.
This is analogous to what the brain would be able to do. The model of an object (features at relative locations) is stored in the connections between neurons within a cortical column. Those are physical connections and can’t be communicated to other columns. However, a column can learn a pooled representation of the object (temporal pooling over activations after a sequence of observations) and communicate this object ID to other columns. This ID no longer contains information about the object shape or features.
A learning module cannot “decide” where to send this information, just as neurons can’t simply specify where to send their spikes.
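
To make that concrete, here is a rough sketch of the kind of content a CMP-compliant message carries (the field names are illustrative, not our actual classes):

from dataclasses import dataclass, field

@dataclass
class Pose:
    location: tuple         # (x, y, z) in a common reference frame
    orientation: tuple      # e.g. a quaternion (w, x, y, z)

@dataclass
class CMPMessage:
    pose: Pose                                     # where, in the common reference frame
    features: dict = field(default_factory=dict)   # features sensed at that pose

# Note what is absent: no internal model, no object graph. An LM can only
# emit summarized state, e.g. features={"object_id": "numenta_cup"}.
msg = CMPMessage(Pose((0.1, 0.2, 0.3), (1.0, 0.0, 0.0, 0.0)),
                 features={"object_id": "numenta_cup"})
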
I’m not sure if this clarifies anything for you. I still don’t quite see the connection to moving through mental space. It’s probably a communication issue, as you seem to come from a very different background, and I am having difficulty finding the analogies to Monty.

I think the fundamental problem I’m having is that although the word “graph” is used all over the project, I haven’t found a definition that tells me how Monty’s use of it relates to graph theoretic notions, terminology, etc.

So, I’ll try to lay out the ways I use the word “graph” and relate them to Monty’s design (additions and corrections welcome…).

In practice, I use a kind of duck typing: “If it walks like a graph and it quacks like a graph, then it must be a graph”. So, if it can be dealt with in terms of graph theory (nodes and edges all the way down), I accept it and try to make the best use of its graphiness.

Graph Types

Looking at Monty, my take is that several types of graphs may be involved in a full-scale implementation:

Message Traffic

  • Each message has a source and some number of targets.
  • Most of the messages go from SMs to and within (sets of) LMs.
  • As modules (e.g., LMs, SMs) exchange (e.g., CMP) messages, they form a directed graph.
  • Although each message has a direction, the message paths tend to be bi-directional. For example, return (e.g., feedback, motor) messages may go in the opposite direction.
  • Although cycles might be present, it’s mostly a directed acyclic graph.
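
As a toy illustration of that last point, here’s a sketch (Python; the module names are invented) where a single feedback edge is enough to break the DAG property:

edges = {
    "SM_vision": ["LM_0"],          # sensory traffic: SMs feed LMs
    "SM_touch":  ["LM_1"],
    "LM_0":      ["LM_1"],          # lateral / hierarchical traffic
    "LM_1":      ["LM_0"],          # feedback edge -> a cycle, so not a strict DAG
}

def has_cycle(graph):
    """Depth-first search with a recursion stack to detect cycles."""
    visited, stack = set(), set()
    def dfs(node):
        visited.add(node); stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in stack or (nxt not in visited and dfs(nxt)):
                return True
        stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(edges))             # True, thanks to the LM_0 <-> LM_1 loop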

FWIW, ChatGPT said:

Me:

Analog computers are mostly DAGs, but some cycles may exist for (say) integration. Comment?

ChatGPT:

That’s a really interesting and nuanced observation — yes, analog computers are mostly directed acyclic graphs (DAGs), but feedback cycles do exist and are often essential for certain operations like integration, control systems, and solving differential equations. …

Connectivity

Regardless of whether modules are actively exchanging messages, some sort of connectivity needs to be set up before anything can be sent. In the brain, this role is played by axons, dendrites, spines, synapses, etc. Over time, new components (e.g., synapses) can form, making the graph more densely connected.

Running under some sort of Actor framework(s), Monty will need equivalent ways to handle message addressing and transport, module creation, etc. In an Elixir-based version of Monty, for example, the BEAM(s) on each compute node would provide the message addressing and transport fabric, spawning and setup of LMs, supervision trees (for fault tolerance), etc.

The BEAM’s message fabric allows any Elixir process to send a message to any other process (based on its node and ID). However, most Monty modules will only exchange messages with a relatively small number of neighbors, remote contacts, etc.

So, modules will need convenient and flexible ways to specify message recipients and/or senders. I’d suggest that they use paths, tags, wildcards, pub/sub, etc. Here are some (SciFi) possibilities:

region/column/level/port
V1/CC13/L4/sensory#modality=vision,priority=high
subscribe(Monty.PubSub, "V1/*/L4/*")
broadcast(Monty.PubSub, "V1/CC13/L4/sensory", message)
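
To show what I mean by wildcards, here’s a minimal topic-matching sketch. I’ve used Python’s fnmatch for brevity (a real implementation, e.g. Phoenix.PubSub in Elixir, would treat “/” as a segment separator):

from fnmatch import fnmatch

subscriptions = {
    "V1/*/L4/*":          ["debug_logger"],     # wildcard subscriber
    "V1/CC13/L4/sensory": ["LM_13"],            # exact-path subscriber
}

def deliver(topic, message):
    """Send message to every subscriber whose pattern matches the topic."""
    for pattern, subscribers in subscriptions.items():
        if fnmatch(topic, pattern):
            for s in subscribers:
                print(f"{s} <- {topic}: {message}")

deliver("V1/CC13/L4/sensory", {"modality": "vision", "priority": "high"})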

Conceptual

Trained LMs will form a conceptual graph of sorts. In particular, they will need to encode abstractions, relationships, etc. Let’s assume that a Monty instance has learned to recognize assorted objects: cups, logos, etc.

Somewhere in its graph of modules and messages, there will be LMs that recognize a cup, a Numenta cup, a logo, a Numenta logo, etc. Although these LMs wouldn’t need to have any textual tags indicating which (concrete or abstract) objects they recognize, it would clearly be useful for them to be tagged:

  • developers could use the information for debugging
  • tagged (and related) modules could be extracted, etc.
  • tags could be used to expose semantic information
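
A tag registry could be as simple as this (entirely hypothetical; Monty has nothing like it today):

tags = {
    "LM_7":  {"recognizes": ["cup", "numenta_cup"],   "domain": "kitchen"},
    "LM_12": {"recognizes": ["logo", "numenta_logo"], "domain": "branding"},
}

# e.g., extract every LM tagged with a given domain, for debugging or export:
kitchen_lms = [lm for lm, t in tags.items() if t.get("domain") == "kitchen"]
print(kitchen_lms)   # ['LM_7']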

Genetic

The act of spawning actors (e.g., LMs) creates a graph of parent-child relationships. Although this may end up being just an implementation detail in some cases, it may have meaning if the new LMs are spawned because other LMs wanted this to happen.

In addition, if a collection of tagged (and/or related) modules is to be loaded into a Monty instance, some attention will need to be paid to details such as biases, module types and connectivity, etc. Basically, this is very akin to loading up a graph database.

Supervision Trees

Elixir uses “supervision trees” to provide fail-soft handling of errors. For example, a set of modules related to a particular sensor might get confused and stop operating correctly because of bad data. The relevant supervision tree could detect this, kill off and reload the modules, etc.

Although the relationships between working modules and supervision trees form a graph, this is mostly an implementation detail from Monty’s perspective.

Moving On…

In summary, it’s pretty clear that lots of graphs will be involved in and around any Monty instance. However, it’s not clear to me how Monty will pull all of this together. It seems to get into the area of composite objects and such, which still harbor open research questions.

So, I think it’s time for me to bail and ask for input…

@Rich_Morin just a note that I moved this into its own post as I thought it deserved its own topic.

Thanks; that makes sense to me, as well. (Pedantic aside: “Its”, like “his” and “hers”, needs no apostrophe.)

Hi @Rich_Morin, just stopping by because I like graphs too (and my favorite form of understanding graphs is through their matrices, e.g. Laplacians, eigendecompositions, and spectral graph theory). :slight_smile:

Update: After writing the response below, I think the overall challenge / something we need to clarify is that our Learning Module is supposed to represent a cortical column, as opposed to, say, a single neuron.

Sharing some thoughts on the different types of graphs you proposed:

Message Traffic & Connectivity

Hmm, we actually already have a graph for message traffic in the form of adjacency matrices. In our code, these are lm_to_lm_matrix (if there is more than one LM), sm_to_lm_matrix, and lm_to_lm_vote_matrix.
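
For example, a toy sketch of such matrices (the shapes and conventions in our actual code may differ):

import numpy as np

# sm_to_lm_matrix[i][j] nonzero means SM j feeds LM i (here: 2 SMs, 2 LMs, one-to-one)
sm_to_lm_matrix = np.array([[1, 0],
                            [0, 1]])

# lm_to_lm_vote_matrix[i][j] nonzero means LM j sends votes to LM i
lm_to_lm_vote_matrix = np.array([[0, 1],
                                 [1, 0]])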

I’m not sure if I would agree with you that the message traffic graph would be mostly a DAG. I think if you are thinking of this on the individual neuron/axon/dendrite/spine/synapse level, then yes, neurons mostly transmit signals in one direction, i.e. presynaptic neuron → synaptic cleft → postsynaptic neuron. However, we are modeling at the cortical column level, which has various feedforward and feedback connections.

(Sadly I know next to nothing about Elixir and BEAM, so I can’t really comment on that portion… but it may be useful when we are perhaps dealing with hundreds or thousands of LMs, or when we don’t want to pre-specify the connectivity via lm_to_lm_matrix, etc., for some reason. Though I would think we would impose some biological constraints on what the connectivity should be - currently we have one sensor module connecting to one learning module, and although in software it is possible to have multiple SMs routing to one LM or vice versa, I don’t think this is biologically aligned.)

Conceptual

Again, I think you may be right if you are thinking of an LM at a neuronal level, where we would then somehow need to extract semantics or textual tags from some circuit connectivity with other LMs or neurons. However, in Monty, a single LM can already represent cups, logos, etc. (and hopefully abstract concepts soon). It’s not that the graph of connected LM nodes represents various concepts as a whole, but that a single LM can already represent these concepts on its own.

I’ll skip the Genetic and Supervision Trees sections as I’m not as familiar as you with graph databases / Elixir / the concept of “spawning”. Sorry!

Note: Please pardon if some of what I said has been echoed before in the thread.

After reading this in your post:

… I like graphs too (and my favorite form of understanding graphs is through their matrices, e.g. Laplacians, eigendecompositions, and spectral graph theory).

I gulped, looked at some Wikipedia pages, and then called on ChatGPT for some help:

Q: A commenter on the TBP forum recently said “… my favorite form of understanding graphs is through their matrices, e.g. Laplacians, eigendecompositions, and spectral graph theory”. I’m pretty solid on basic graph theory, but this approach seems totally incomprehensible to me. Help?

A: That comment reflects a fascinating and powerful way of understanding graphs—one that’s more about linear algebra than the usual combinatorics of graph theory. If you’re solid on basic graph theory, you already have a great foundation. Let’s break down what they mean and why this view is so popular in areas like data science, physics, and machine learning. …

Once I’ve (mostly) digested ChatGPT’s response and the WP pages, I may have something relevant to say. Meanwhile, others on this list may also find the response helpful.

Getting back to the Conceptual Graph question, I’d like to propose a simple scenario: A Monty instance is trained on a topically constrained set of items, including only kitchen gear (e.g., cups, forks, knives, plates, spoons).

It seems to me that the data from this instance could be characterized usefully as being “about kitchen gear” and put into an archive site for downloading and use. Is this unreasonable?

Hey Rich,

Yes, you can think of it that way, though I’d like to restate what you said more precisely.

Let’s consider a Monty instance, composed of sensor modules (touch and vision) and 10 learning modules (5 connected to touch, and 5 connected to vision). As Monty moves around and sees and touches the kitchen gear, the learning modules fill with information. That information is a set of sensations at locations held within each LM. The LMs connected to vision sensors store object locations with visual information (color), while the LMs connected to the touch sensors store locations with touch information (temperature).

  • Each LM will learn encapsulated, isolated representations of the objects that LM’s sensor senses.
  • Each of the LMs will have slightly different representations of the morphologies of objects.
  • Each of the LMs will have sensor-specific information about each sensed location.

So, when you say “download it for use later”, we’d actually be talking about 10 downloads, one download for each LM.
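
In pseudo-Python, a per-LM download might look something like this (hypothetical sketch, not our actual checkpointing API):

import pickle

def save_lms(lms, prefix="kitchen_gear"):
    """One file per LM - hence 10 downloads for the 10-LM instance above."""
    for i, lm in enumerate(lms):
        with open(f"{prefix}_lm{i}.pkl", "wb") as f:
            pickle.dump(lm, f)   # each LM's models are stored (and shipped) separately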

Interesting multi-modal side note: you could upload the data into 10 LMs that are connected to a completely new type of sensor, let’s say sonar sensors, and they could immediately recognize objects based on the morphology data that was learned in the original Monty setup. Monty solves the modality fusion problem that is very tricky for DNNs. Of course, the sensor-specific information (touch and vision) would no longer be helpful.

There are a lot of open questions around re-using data between different Monty instances that we haven’t considered yet, as we’re still in the exploratory stage of development.

  • Can you / should you consolidate models from multiple LMs → 1 LM?
  • Can you / should you consolidate models from LMs with different modalities?
  • How would this work in a layered-hierarchy setup of Monty?
  • Do you have to mirror the LM hierarchy for compositional object recognition?
  • Does the granularity of sensations of the source LM dictate the granularity of a target LM?

[edit] changed the word graph → set

Thanks for expanding and commenting on my scenario. Most of your note is clear to me, but once again I’m unsure what is meant by the word “graph”:

That information is a graph of sensations at locations held within each LM.

Clearly, data structures developed in the LMs will contain descriptive and contextual information (e.g., features, poses) for the kitchen gear. Please explain (e.g., in terms of nodes and edges) how this constitutes a “graph”? ELI5…

I edited my response to use the word set instead of graph. It’s just some set of data in the LM; the format doesn’t really matter.

Although your edit clarifies the intent of your posting, it doesn’t help much with the question of what the TBT white paper means in the several dozen places where it uses the word “graph” and related terms. Here is a boiled-down list of graph-related terminology used in the paper:

  • (3D, constrained, explicit, learned, object) graph
  • edge, node
  • graph (learning module, matching, memory, mismatch, points, representation, structure)
  • graph-based (LM, reference frame), graph-LM
  • (nearest) neighbor

The following text indicates that “features are nodes” and “displacements are edges”. Is this accurate and/or the whole story?

Figure 13: A Graph of features (nodes), linked by displacements (edges).

Usage Examples

Here, for reference, is a fairly exhaustive extract from the white paper, highlighting uses of graph-related terminology.

2.2 Core Principles

For example, object models are currently based on explicit graphs in 3D Cartesian space.

9.1 Different Phases of Learning

As such, models in graph memory are updated after every episode, and learning and inference are tightly intertwined.

To keep track of which objects were used for building a graph (since we do not provide object labels in this unsupervised learning setup), we store two lists in each learning module, mapping between learned graphs and the ground-truth objects observed in the world.

9.2 First Generation Learning Modules

We have experimented with several variants, but the majority of our work has so far focused on LMs that leverage explicit, 3D graphs in Cartesian space. As such, these graph-based LMs can be considered the first generation of possible implementations. You may see occasional references to a ‘feature’ or ‘displacement’-based graph-LM, however the evidence-based LM is the implementation that we use as the default for all of our current experiments as it is most robust to noise and sampling new locations.

In brief, it makes use of graph-based reference frames where the evidence score associated with any node in the graph can be iteratively adjusted.

We note that using explicit 3D graphs makes visualization more intuitive, improves interpretability, and facilitates debugging. This does not mean that we believe the brain stores explicit graphs with Cartesian coordinates.

9.3 The Buffer (Short-Term Memory)

Its content is used to update the graph memory at the end of an episode.

9.4 The Graph Memory (Long-Term Memory)

Each learning module has one graph memory which it uses as a long-term memory of previously acquired knowledge. In the graph learning modules, the memory stores explicit object models in the form of graphs in 3D Cartesian space. The graph memory is responsible for storing, updating, and retrieving models from memory.

9.5 Object Models

Object models are stored in the graph memory and contain information about one object. The information they store is encoded in reference frames and contains poses relative to each other and features at those poses. More specifically, the model encodes an object as a graph with nodes. Each node contains a pose and a list of features. Edge information can be used in principle (storing important displacements), but is not currently emphasized. Furthermore, graphs can generally be arbitrarily large in dimension and memory, although we are now experimenting with a form of constrained graphs that encourage intelligent use of limited representational capacity.

A graph is constructed from a list of observations (poses, features). Each observation can become a node in the graph, which in turn connects to its neighbors in the graph by proximity or temporal sequence, indicated by the edges of the graph. Each edge has a displacement associated with it, which is the action that is required to move from one node to the other. Each node can have multiple features associated with it or simply indicate that there was information sensed at that point in space. Each node must contain location and orientation information in a common, object-centric reference frame.

9.7 Graph Updates

If a graph is not stored in memory yet, the LM will not find a match during object recognition, and it will add a new graph to memory.

Even if the object is already stored in memory, there may be new features we can learn about it and incorporate into the graph.

If a new point is too similar to those already in the graph by some threshold (such as being close in space or having similar features), then the LM will not add the point to its long-term memory.

Figure 13: A Graph of features (nodes), linked by displacements (edges). Each node represents a relative location and stores three pose vectors (for example, the point normal and the two principal curvature directions). Nodes can also have pose-independent features associated with them, such as color and curvature. The graph stored in memory can then be used to recognize objects from actual feature-pose observations.

9.8 Using Graphs for Prediction and Querying Them

We can use graphs in memory to predict if there will be a feature sensed at the next location and what the next sensed feature will be, given an action/displacement (forward model). This prediction error can then be used for graph matching to update the possible matches and poses.

A graph can also be queried to provide an action that leads from the current feature to a desired feature (inverse model).

9.9 The Evidence-Based Learning Module

The evidence-based LM uses a graph representation of objects, with all of the elements described up until now.

9.10 Initializing Hypotheses

Figure 14: (top) Building a graph from a buffer of observations. First, similar observations (high spatial proximity and feature similarity) are removed, and then the observations are turned into a graph structure as described above.

10.3 Terminal Conditions

In our current experimental setup, we divide time into episodes. Each episode ends when a terminal state is reached. In the object recognition task, this is either no match (the model does not know the current object and we construct a new graph for it), match (we recognized an object as corresponding to a graph in memory), or time out (we took a maximum number of steps without reaching one of the other terminal states).

Figure 19: Information flow in a graph learning module … Once matching is completed, the list of features and poses in the buffer can be used to update the graph memory.

11.4.1 Policy Algorithm Details

To determine the most distinguishing part of the object to test, a form of “graph mismatch” is employed. This technique takes the most likely and second most likely object graphs, and using their most likely poses, overlays them in an internal (“mental”) space. It then determines for every point in the most likely graph, how far the nearest neighbor is in the second graph. The output is the point in the first graph that has the most distant nearest neighbor (Figure 25).

If the most likely object was a mug, and the second most likely object a can of soup, then a point on the handle of the mug would have the most-distant nearest neighbor to the can-of-soup graph.

For the graph-mismatch component, using the Euclidean distance between graph points is a reasonable heuristic for identifying potentially diagnostic differences in the structures of two objects.

Figure 25: … Once overlaid, the graph-mismatch technique proposes testing a part of the head of the spoon (red-spot, center) as it maximally distinguishes that graph from the other.
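
As an aside, the nearest-neighbor computation behind “graph mismatch” seems easy to sketch (my own minimal take, not the project’s code):

import numpy as np

def most_distinguishing_point(graph_a, graph_b):
    """Return the point in graph_a whose nearest neighbor in graph_b is farthest."""
    # pairwise Euclidean distances: dists[i, j] = |a_i - b_j|
    dists = np.linalg.norm(graph_a[:, None, :] - graph_b[None, :, :], axis=-1)
    nn_dist = dists.min(axis=1)        # nearest-neighbor distance for each point in a
    return graph_a[nn_dist.argmax()]   # e.g., a point on the mug's handle

mug  = np.random.rand(100, 3)          # stand-ins for two overlaid object graphs
soup = np.random.rand(80, 3)
print(most_distinguishing_point(mug, soup))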

@Rich_Morin, I too found our use of “graph” somewhat confusing.

The shift that made it coherent for me was to think of “graph” as in “graph of a mathematical function”, in other words, a “plot”, as in “a plot of all the points in a space”. A “point cloud” might be most accurate for today’s code.

There is some historical context for why it’s called “graph”. There is also some original context when we worked on a learning module that maintained displacement edges. However, that historical context is mostly gone for now, and “plot of all the points in a space” or a “point cloud” is, I believe, a better interpretation for what’s happening in the code today. It’s still technically a “graph”, but it’s not the “graph” everyone thinks of.

Hi Rich, I think Tristan beat me to the answer, but another take if you are interested: The main “graph” that we have is the object graph (which is more of a pointcloud, as Tristan pointed out). The other terms - like graph (learning module, memory) - are, I think, called that way because they work with these graph object models, but they are not themselves graph-like structures. For example, once Monty is trained, you can load the object graph from a Learning Module’s graph memory, which is just a dictionary with all the objects it has learned as keys. Currently, the object itself is an instance of torch_geometric.data.Data.

Would saying that an object graph / object model is a spatial graph make more sense? It’s spatial in the sense that the nodes are located in 3D Euclidean space. Here, each node is positioned in some (x, y, z) coordinate (called locations), and each node holds some information (like pose vectors, rgba, etc.) called features (maybe “node attributes” is a clearer term?).
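
For example, a tiny object graph in exactly that format (the values are made up):

import torch
from torch_geometric.data import Data

pos = torch.tensor([[0.00, 0.00, 0.00],      # node locations in 3D (x, y, z)
                    [0.01, 0.02, 0.00],
                    [0.03, 0.01, 0.01]])
x = torch.tensor([[0.9, 0.1, 0.1, 1.0],      # per-node features, e.g. rgba
                  [0.8, 0.2, 0.1, 1.0],
                  [0.7, 0.2, 0.2, 1.0]])

object_graph = Data(pos=pos, x=x)            # no edge_index: effectively a pointcloud
print(object_graph)                          # Data(x=[3, 4], pos=[3, 3])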


(This is Fig 24 from the white paper). I’m probably biased because I like graphs, but when I see the above figure I see “pointcloud / graph of object” instead of “a 3D scatterplot” :joy:

I hope I’m not causing more confusion…

Also, on a side note:

I gulped, looked at some Wikipedia pages, and then called on ChatGPT for some help:

Sorry if I led you down a rabbit hole on spectral graph theory! (Though I must admit it is an amazing rabbit hole) For my post before, I think the only thing you need to understand is what an adjacency matrix is. No spectral analysis involved. :slight_smile:

Saying that an LM’s 3D object model is defined as a collection of points in 3D Euclidean space (roughly, a point cloud) is completely fine with me. Problem is, that’s only one use case. If I say that @hlee is a TBP researcher, how does 3D Euclidean space inform that statement?

I think the basic problem is that, while the displacements (etc) are explicit, the edges are implicit. So, although the Numenta logo has a certain displacement from the “origin” of the Numenta cup, this isn’t really an “edge” in terms of graph theory, conceptual graphs, etc.

What I’d like to have is a way to tie Monty’s internal models to meanings that make sense to humans, LLMs, etc. I think that this might be accomplished by adding some tags and such, but this is venturing far outside of neurological approaches.

In any case, the Good News is that I now understand what’s going on. The Bad News, IMHO, is that this (mis)use of the word “graph” is needlessly confusing. Defining a new term (e.g., Learning Module) is fine, using an existing term in a new, unexplained, and confusing manner is not.

FWIW, I think terms like “object model” and “point cloud” have far less chance of confusing folks, down the road…

If I say that @hlee is a TBP researcher, how does 3D Euclidean space inform that statement?

I think this goes back to the discussion about learning abstract concepts in Monty, right? Here, the concept of “Hojae is a TBP researcher” doesn’t really fit into a physical space such as 3D Euclidean, as you mentioned. Currently we focus on Monty operating in 3D Euclidean space to do things like object recognition. While harder to imagine, I think Monty can (eventually) learn in an abstract space, like a space of “relationships”, for the “Hojae is a TBP researcher” example:


[Image: an example relationship graph, e.g. linking TBP researchers to the dogs they own.]

I think the above can be a graph that a Learning Module stores in its memory, and Monty could do “inference”, like answering a question such as: “how many total dogs do TBP researchers have?”
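
As a toy sketch (the people and dogs are entirely made up):

edges = [
    ("Hojae", "is_a", "TBP_researcher"),
    ("Alice", "is_a", "TBP_researcher"),     # hypothetical colleague
    ("Hojae", "owns", "dog_1"),
    ("Alice", "owns", "dog_2"),
    ("Alice", "owns", "dog_3"),
]

# "Inference": how many total dogs do TBP researchers have?
researchers = {s for s, r, o in edges if r == "is_a" and o == "TBP_researcher"}
dogs = [o for s, r, o in edges if r == "owns" and s in researchers]
print(len(dogs))   # 3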

FWIW, I’ve thought a bit about what “general” space Monty can operate in (though I didn’t pursue it very far at that time). The gist of the thought was: what properties might Monty need? Can it be a topological space with just a concept of “closeness”? Does it require a distance metric that satisfies positivity, symmetry, and the triangle inequality? Does it need angles? Does the space need to be flat everywhere, or can it be a curved space with local flatness, like a Riemannian manifold? (I don’t have answers to these :sweat_smile:)

I think the basic problem is that, while the displacements (etc) are explicit, the edges are implicit. So, although the Numenta logo has a certain displacement from the “origin” of the Numenta cup, this isn’t really an “edge” in terms of graph theory, conceptual graphs, etc.

Yeah… though I don’t think this means we can never add edges for physical objects? For example, we could add an edge with attributes like timestep or movement action (e.g. “moved up at a speed of X or force Y”). I could imagine how storing some speed information and letting Monty do a random walk could be used to infer the size of an object (e.g. let’s say I want to distinguish a big cup from a small cup that is identical in features and morphology - if I happen to return to the same spot in fewer timesteps while randomly walking on one object than on the other, then I might think that’s the smaller object…). Or maybe there could be a self-loop edge if I tried to move in a direction with force Y but ended up in the same location… this could tell me something about the material of the object (trying to “push” the object with Y newtons of force didn’t deform it). I’m completely spitballing here, though…

What I’d like to have is a way to tie Monty’s internal models to meanings that make sense to humans, LLMs, etc. I think that this might be accomplished by adding some tags and such, but this is venturing far outside of neurological approaches.

I’m curious about your thoughts on the tagging approach. How would you add tags? Does it need to be exhaustive? Storing additional information as tags for objects (let’s say Monty learned fork, cup, and bowl) with “found in kitchen” is nice, but what about “can be used for pouring water”? That attribute / tag would be very useful when Monty is trying to solve a larger goal of “put out fire”, but then there could be infinitely many attributes per object, and they’d be highly context-dependent, right?

(Also, this feels somewhat related to object representation, and there was some discussion about that here: 2024/08 - Encoding Object Similarity in SDRs).

In any case, the Good News is that I now understand what’s going on. The Bad News, IMHO, is that this (mis)use of the word “graph” is needlessly confusing. Defining a new term (e.g., Learning Module) is fine, using an existing term in a new, unexplained, and confusing manner is not.

FWIW, I think terms like “object model” and “point cloud” have far less chance of confusing folks, down the road…

If you are up for it, I think this would make an excellent RFC with a great motivation section! :smiley:
