Calculating the confidence of a goal state

The following dead code is being removed from src/tbp/monty/frameworks/models/goal_state_generation.py soon. It contains some sketched notes, apparently for computing a confidence metric that takes a parent Learning Module into account.

Although removing the dead code seems like the right choice, I wanted to save the notes for later. It was created in the initial commit by @vclay, so attribution is a little unclear (especially the “TODO M”, which may refer to someone whose name starts with “M”).

    def _compute_goal_confidence(
        self, lm_output_confidence, separation, space_size=1.0, confidence_weighting=0.1
    ):
        """Calculate the confidence of the goal-state.

        The confidence is based on, e.g., the separation in hypothesis-space
        between the two MLH, and the confidence associated with the MLH
        classification of the parent LM. Currently just returns the confidence
        of the parent LM, but TODO M implement a more sophisticated function.

        TODO M How to normalize the displacement?
        Could put it through a sigmoid, perhaps scaled by the size of the
        object? Could divide by e.g. the size of the object to make it likely
        to be <1, and then clip it; that way any subtle differences between
        LMs are likely to be preserved, i.e. rather than them all clipping to
        1.0; can then just make sure this value is weighted heavily compared
        to confidence when computing the overall strength of the goal-state.
        - size of the object could be estimated from the minimum and maximum corners
        - or use the max size of the graph --> Note this doesn't account for the
        actual size of the object, and these grid-models are not currently used

        Returns:
            The confidence of the goal-state.
        """
        # Provisional implementation:
        # squashed_displacement = np.clip(separation / space_size, 0, 1)
        # goal_confidence = (squashed_displacement
        #                    + confidence_weighting * lm_output_confidence)

        return lm_output_confidence
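For what it's worth, the commented-out provisional implementation can be fleshed out into a runnable sketch. This is just my reading of the notes, not the original authors' method; `estimate_object_size` is a hypothetical helper for the "minimum and maximum corners" idea, and the default `confidence_weighting` is the one from the dead code:

```python
import numpy as np


def estimate_object_size(points):
    """Hypothetical helper: rough object size as the diagonal of the
    axis-aligned bounding box spanned by the minimum and maximum corners."""
    min_corner = np.min(points, axis=0)
    max_corner = np.max(points, axis=0)
    return float(np.linalg.norm(max_corner - min_corner))


def compute_goal_confidence(
    lm_output_confidence, separation, space_size=1.0, confidence_weighting=0.1
):
    """Sketch of the provisional confidence: divide the separation between
    the two MLHs by the (estimated) object size so it is usually < 1, clip
    to [0, 1] so subtle differences between LMs are preserved rather than
    all saturating at 1.0, and add a lightly weighted contribution from the
    parent LM's confidence."""
    squashed_displacement = float(np.clip(separation / space_size, 0.0, 1.0))
    return squashed_displacement + confidence_weighting * lm_output_confidence
```

Under this reading, `space_size` could be supplied as `estimate_object_size(...)` over the model's points, and the separation term dominates (contributing up to 1.0) while the parent LM's confidence contributes at most `confidence_weighting`, matching the note that the displacement should be "weighted heavily compared to confidence".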

Thanks for documenting this here! I agree it makes sense to remove it for now and only add it back when we actually use it, but thanks for saving the comments here.

This was actually written by @nleadholm. The letters after some of the TODOs are an inconsistent internal convention we used for a while to tag TODOs associated with different parts of the project. I think the M refers to motor. We also have TODO H for hierarchy, TODO S for State/CMP related work, and TODO O for optimization tasks. But this key probably just lives in Niels and my memory by now :smiley:
