About compositionality and heterarchy

@ElyMatos and @tslominski,

Typing this up on my phone, so hopefully everything is readable enough. @tslominski, I’m including you here so you’ll have the opportunity to point out any inaccuracies I might have made.

Alright, here we go…

The way aggregation works is actually pretty interesting. As it turns out, an LM does not actually “average out” or “aggregate” the sum of all its graph’s nodes, or at least not in the way we were imagining it.

What actually happens is that within that graph there exists a single node that comes to represent the entire graph. As an analogy, imagine a village of people who assign a delegate to represent them. It’s like that.

Now, we need to have a way to determine which node will become this delegate. The process by which we can do this is called evidence accumulation.

Each node has attributes (features) like spatial location, curvature, color, or orientation. During evidence updates, these features are compared against observations. You can liken this to making a prediction about the world, then comparing that prediction against reality. The closer your prediction is to the real-world observation, the higher the evidence.
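
As a toy illustration of that prediction-vs-observation comparison (the function name and the Gaussian kernel below are my own invention for this sketch, not Monty’s actual scoring code):

```python
import numpy as np

def feature_evidence(predicted, observed, sigma=0.1):
    """Toy score: the closer the predicted features are to the
    observed ones, the higher the evidence (max 1.0).

    A Gaussian kernel over the feature distance is just one
    plausible choice; the real system's scoring is more involved.
    """
    distance = np.linalg.norm(np.asarray(predicted) - np.asarray(observed))
    return np.exp(-(distance ** 2) / (2 * sigma ** 2))

# An accurate prediction earns nearly full evidence...
close = feature_evidence([0.0, 1.0, 0.0], [0.01, 0.99, 0.0])
# ...while a wildly wrong one earns essentially none.
far = feature_evidence([0.0, 1.0, 0.0], [1.0, 0.0, 0.0])
```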

Now, what’s interesting here is how a node’s neighbors can affect its own evidence score. For example, if your neighboring nodes are shown to be accurate, then the evidence score of your own node will increase proportionally.

A good way to view this, in my opinion, is to imagine that you own a home in a neighborhood. Now imagine that one of your neighbors completes a bunch of home improvement projects, thereby raising the property value of their house. This is obviously good for their own home’s resale value, but it will also increase the value of your home, due to your shared proximity. Neighboring nodes work exactly like this. Also, if it wasn’t obvious, inaccurate neighboring nodes will just as easily decrease the evidence score of your node. It works both ways.

The main factors which contribute to a node’s evidence score are the following:

  • Feature Matching: How well the observed features match the node’s predicted features.

  • Displacement Matching: How well the node’s pose aligns with the observed displacement or movement.

  • Voting Inputs: Evidence from other LMs. We haven’t talked about this one much, but the evidence of neighboring LMs can affect the global evidence space of your own LM.
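
Putting the three factors together, a single node’s update might look something like this sketch (the weights and function name are mine, purely illustrative, not Monty’s actual update rule):

```python
def update_evidence(evidence, feature_match, displacement_match, vote_input,
                    w_feat=1.0, w_disp=1.0, w_vote=0.5):
    """Toy accumulation: add each weighted contribution to the
    node's running evidence score."""
    return (evidence
            + w_feat * feature_match
            + w_disp * displacement_match
            + w_vote * vote_input)

# A node that matches well and has supportive votes climbs quickly:
ev = update_evidence(0.0, feature_match=0.9, displacement_match=0.8, vote_input=0.4)
# A node contradicted by its inputs loses evidence just as easily:
ev_bad = update_evidence(0.0, feature_match=-0.5, displacement_match=-0.3, vote_input=-0.2)
```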

So now, we finish calculating all of these evidence scores across all the nodes of our given graph. We then select the node with the highest evidence score and designate it the most_likely_hypothesis (MLH).

Here’s the bit of code which I believe handles this:

mlh_id = np.argmax(self.evidence[graph_id])
mlh = self._get_mlh_dict_from_id(graph_id, mlh_id)

The first line (mlh_id) finds the index of the highest-evidence node; the second line (mlh) grabs all of that node’s relevant information.
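
To make that selection step concrete, here’s the same argmax on a made-up evidence table (the graph name and scores are invented for illustration):

```python
import numpy as np

# Pretend evidence scores for five nodes in one object's graph.
evidence = {"mug": np.array([0.2, 1.7, 0.9, 1.1, 0.4])}

graph_id = "mug"
mlh_id = int(np.argmax(evidence[graph_id]))
# Node 1 has the highest score (1.7), so it becomes the delegate
# for the entire "mug" graph.
```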

By doing the above we end up getting a single representative node which serves as a proxy for the entirety of the graph. Later, this representative node gets packaged up by the get_output function:

mlh = self.get_current_mlh()
pose_features = self._object_pose_to_features(mlh["rotation"].inv())
object_id_features = self._object_id_to_features(mlh["graph_id"])

pose_features converts the MLH’s rotation into feature vectors, while object_id_features encodes the object ID into features. In this way, an object ID is itself a feature.
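
To give a feel for what “an object ID is a feature” could mean, here’s a toy encoder that hashes the ID into a fixed-length binary vector. (Monty’s real _object_id_to_features works differently; this only shows the general idea that an ID can be carried as an ordinary feature vector.)

```python
import hashlib
import numpy as np

def object_id_to_features(graph_id, n_bits=16):
    """Toy encoder: hash an object ID string into a deterministic
    binary feature vector so the ID can ride alongside the other
    features in an output message."""
    digest = hashlib.sha256(graph_id.encode()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:n_bits]

vec = object_id_to_features("mug")
# The same ID always maps to the same vector, so a downstream LM
# can treat it like any other repeatable feature.
```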

Now then, there are a ton of really cool things here we haven’t talked about yet. For instance, the way LM voting plays into this (both laterally and hierarchically), or the way evidence constraints are established (they’re basically bounded between -1 and 2). We also haven’t mentioned how the evidence_based.py file is basically scaffolded onto the graph_matching.py file, and how the latter contributes to all of this.

These other things seem pretty important, but maybe not critical to understanding the gist of the process. That, plus it’s getting pretty late and I’m tired.

But anyways, I hope this helps answer your question, at least a little. Please don’t hesitate to ask if you need any clarification on anything. Until then, have a good weekend!
