How could Monty speak? (Neurosymbolic Syntax)

Hi @Judah_Meek,

There are two types of information that you may be after here:

1. Monty’s final converged answer
Currently, the outputs of Monty that you probably care about are `most_likely_object` and `most_likely_rotation`. Both are emitted through the logging system, which could be the extension point for a proof of concept that connects to a third-party system. You can also look at the Wandb handlers (src/tbp/monty/frameworks/loggers/wandb_handlers.py) or the CSV logger (src/tbp/monty/frameworks/loggers/monty_handlers.py). The logged .csv files contain a `result` column showing what Monty has converged on; this column can also contain a list of objects, or 'no match'.
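As a sketch of how a third-party consumer might pick up the `result` column from a logged CSV (the file layout here is a stand-in for illustration; only the `result` column name comes from the description above):

```python
import csv
import io

def read_results(csv_text):
    """Collect the per-episode `result` values from a Monty eval CSV.

    The `result` column is described above; the other columns in this
    sample are assumptions, not the actual logger schema.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["result"] for row in reader]

# Stand-in for the contents of a logged .csv file.
sample = "episode,result\n0,mug\n1,no_match\n"
print(read_results(sample))  # ['mug', 'no_match']
```

In a real integration you would point `csv.DictReader` at the logger's output file instead of an in-memory string.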

2. Hypothesis at every step
The CMP is the main output of the LM communicating its current hypothesis, if it is above a confidence threshold. If you want to know Monty's hypothesis at every step, you can call `lm.get_current_mlh()`, which returns a dictionary that includes the hypothesized object ID, orientation, location, and the amount of evidence for this hypothesis.
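A minimal sketch of inspecting that per-step hypothesis. The exact dictionary keys below (`graph_id`, `location`, `rotation`, `evidence`) are assumptions for illustration; check what `lm.get_current_mlh()` actually returns in your version:

```python
def summarize_mlh(mlh):
    """Format a most-likely-hypothesis dict for display or export.

    Assumed keys: graph_id (object ID), location, rotation, evidence.
    Verify these against the dict your learning module returns.
    """
    return (f"object={mlh['graph_id']} "
            f"evidence={mlh['evidence']:.2f} "
            f"location={mlh['location']}")

# Stand-in for one step's return value of lm.get_current_mlh().
mlh = {
    "graph_id": "mug",
    "location": [0.01, 0.02, 0.03],
    "rotation": [0.0, 90.0, 0.0],
    "evidence": 12.5,
}
print(summarize_mlh(mlh))  # object=mug evidence=12.50 location=[0.01, 0.02, 0.03]
```

Calling something like this once per Monty step would give you the running hypothesis stream, rather than only the final converged answer.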

@DLed has recently put together a live view extension ("What if (?) Monty had a live view") that could be an interesting starting point as well.

Will

2 Likes