Hi, I am trying to run the Omniglot tutorial with the latest code, but I encountered a KeyError. I am running in a WSL2 environment.
1. Steps to reproduce: I freshly cloned the repository and ran the following command:

```bash
python run.py experiment=tutorial/omniglot_training
```
2. Error Log:
```
Error executing job with overrides: ['experiment=tutorial/omniglot_training']
Traceback (most recent call last):
  ...
  File "/home/ptg/tbp/tbp.monty/src/tbp/monty/frameworks/models/sensor_modules.py", line 425, in update_state
    sensor = agent.sensors[SensorID(self.sensor_module_id + ".rgba")]
KeyError: 'view_finder.rgba'
```
3. My Observation: It seems the code expects a `view_finder` sensor, but the `tutorial/omniglot_training` configuration might be missing the sensor_module definition for it. (The surface agent experiments work fine with the `view_finder` config.)
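A minimal sketch of the failure mode, using plain strings as stand-ins for the real `SensorID`/`SensorState` objects: the sensor module looks up `"<sensor_module_id>.rgba"` in the agent's sensor dict, but only the `patch` entries are present.

```python
# Hypothetical reduction of the lookup in sensor_modules.py (stand-in data,
# not the real SensorState objects): the environment only populated
# proprioceptive state for the "patch" sensor, so the view_finder lookup fails.
sensors = {
    "patch.depth": "patch depth state",
    "patch.rgba": "patch rgba state",
}

sensor_module_id = "view_finder"
try:
    sensors[sensor_module_id + ".rgba"]
except KeyError as err:
    print(err)  # prints 'view_finder.rgba'
```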
Has the config for Omniglot not been updated for the latest changes? Could you guide me on how to fix this YAML file?
Thanks!
Apologies @taegeon5846-lab, I confirm that the tutorial is broken and I can replicate the problem you’re seeing.
Unfortunately, I got to this late in the day and won’t be able to resolve this today.
FWIW, the tutorial configuration does include the view_finder sensor.
The problem seems to be that the OmniglotEnvironment does not return ProprioceptiveState for the view_finder: tbp.monty/src/tbp/monty/frameworks/environments/two_d_data.py at e00656010a3438cc4be77feec943b3fe78bf1fa7 · thousandbrainsproject/tbp.monty · GitHub
You should be able to just copy the same proprioceptive data that patch uses:

```python
def get_state(self) -> ProprioceptiveState:
    loc = self.locations[self.step_num % self.max_steps]
    sensor_position = np.array([loc[0], loc[1], 0])
    return ProprioceptiveState(
        {
            AgentID("agent_id_0"): AgentState(
                sensors={
                    SensorID("patch.depth"): SensorState(
                        rotation=self.rotation,
                        position=sensor_position,
                    ),
                    SensorID("patch.rgba"): SensorState(
                        rotation=self.rotation,
                        position=sensor_position,
                    ),
                    SensorID("view_finder.depth"): SensorState(
                        rotation=self.rotation,
                        position=sensor_position,
                    ),
                    SensorID("view_finder.rgba"): SensorState(
                        rotation=self.rotation,
                        position=sensor_position,
                    ),
                },
                rotation=self.rotation,
                position=np.array([0, 0, 0]),
            )
        }
    )
```
@taegeon5846-lab this should now be fixed.
Thanks for the update! The KeyError is completely gone.
However, I encountered a small TypeError in object_model.py. It seems `keys` was missing parentheses. I fixed it locally by changing `self._graph.keys` to `self._graph.keys()` on lines 136 and 141, and now the tutorial runs perfectly.
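In case it helps anyone else who hits this, here is a toy example (a plain dict, not the actual ObjectModel code) of why the missing parentheses matter: without them you get the bound method object instead of the key view.

```python
# Toy reproduction of this bug class: `graph.keys` (no parentheses) is a bound
# method object, so membership tests and iteration over it raise TypeError.
graph = {"obj_0": "model_a", "obj_1": "model_b"}

try:
    "obj_0" in graph.keys  # missing () -> TypeError
except TypeError as err:
    print(err)

# With the parentheses it behaves as intended.
assert "obj_0" in graph.keys()
assert list(graph.keys()) == ["obj_0", "obj_1"]
```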
Thanks again for your help!
Thank you for the additional report.
Unfortunately, I am unable to duplicate the TypeError you ran into.
It does look like `self._graph.keys` is missing parentheses, but that code hasn't changed in over a year.
Not sure if it helps to identify the difference, but here's the configuration printed when I run:

```bash
python run.py experiment=tutorial/omniglot_training
```
```yaml
benchmarks:
  default_all_noise_params:
    features:
      pose_vectors: 2
      hsv: 0.1
      principal_curvatures_log: 0.1
      pose_fully_defined: 0.01
      location: 0.002
  default_sensor_features:
  - pose_vectors
  - pose_fully_defined
  - on_object
  - hsv
  - principal_curvatures_log
  min_eval_steps: 20
  pretrained_dir: ${path.expanduser:${oc.env:MONTY_MODELS}/pretrained_ycb_v11}
  rotations_all_count: 14
  rotations_all:
  - - 0
    - 0
    - 0
  - - 0
    - 90
    - 0
  - - 0
    - 180
    - 0
  - - 0
    - 270
    - 0
  - - 90
    - 0
    - 0
  - - 90
    - 180
    - 0
  - - 35
    - 45
    - 0
  - - 325
    - 45
    - 0
  - - 35
    - 315
    - 0
  - - 325
    - 315
    - 0
  - - 35
    - 135
    - 0
  - - 325
    - 135
    - 0
  - - 35
    - 225
    - 0
  - - 325
    - 225
    - 0
  rotations_3_count: 3
  rotations_3:
  - - 0
    - 0
    - 0
  - - 0
    - 90
    - 0
  - - 0
    - 180
    - 0
experiment:
  config:
    do_train: true
    do_eval: false
    show_sensor_output: false
    max_train_steps: 1000
    max_eval_steps: 500
    max_total_steps: 6000
    n_train_epochs: 1
    n_eval_epochs: 3
    model_name_or_path: ''
    min_lms_match: 1
    seed: 42
    supervised_lm_ids: all
    logging:
      monty_log_level: SILENT
      monty_handlers: []
      wandb_handlers: []
      python_log_level: WARNING
      python_log_to_file: true
      python_log_to_stderr: true
      output_dir: ${path.expanduser:${oc.env:MONTY_MODELS}/my_trained_models}
      run_name: omniglot_training
      resume_wandb_run: false
      wandb_id:
        _target_: wandb.util.generate_id
      wandb_group: debugging
    monty_config:
      motor_system_config:
        motor_system_args:
          policy_args:
            file_name: null
            good_view_percentage: 0.5
            desired_object_distance: 0.03
            use_goal_state_driven_actions: false
            switch_frequency: 1.0
            min_perc_on_obj: 0.25
            agent_id: ${monty.agent_id:agent_id_0}
            action_sampler_args:
              actions:
              - ${monty.class:tbp.monty.frameworks.actions.actions.LookUp}
              - ${monty.class:tbp.monty.frameworks.actions.actions.LookDown}
              - ${monty.class:tbp.monty.frameworks.actions.actions.TurnLeft}
              - ${monty.class:tbp.monty.frameworks.actions.actions.TurnRight}
              - ${monty.class:tbp.monty.frameworks.actions.actions.SetAgentPose}
              - ${monty.class:tbp.monty.frameworks.actions.actions.SetSensorRotation}
              rotation_degrees: 1.0
            action_sampler_class: ${monty.class:tbp.monty.frameworks.actions.action_samplers.ConstantSampler}
          policy_class: ${monty.class:tbp.monty.frameworks.models.motor_policies.InformedPolicy}
        motor_system_class: ${monty.class:tbp.monty.frameworks.models.motor_system.MotorSystem}
      monty_args:
        num_exploratory_steps: 1000
        min_eval_steps: 3
        min_train_steps: 3
        max_total_steps: 2500
      monty_class: ${monty.class:tbp.monty.frameworks.models.graph_matching.MontyForGraphMatching}
      learning_module_configs:
        learning_module_0:
          learning_module_class: ${monty.class:tbp.monty.frameworks.models.displacement_matching.DisplacementGraphLM}
          learning_module_args:
            k: 5
            match_attribute: displacement
      sensor_module_configs:
        sensor_module_0:
          sensor_module_class: ${monty.class:tbp.monty.frameworks.models.sensor_modules.HabitatSM}
          sensor_module_args:
            sensor_module_id: patch
            features:
            - pose_vectors
            - pose_fully_defined
            - on_object
            - principal_curvatures_log
            save_raw_obs: false
            pc1_is_pc2_threshold: 1
        sensor_module_1:
          sensor_module_class: ${monty.class:tbp.monty.frameworks.models.sensor_modules.Probe}
          sensor_module_args:
            sensor_module_id: view_finder
            save_raw_obs: false
      sm_to_agent_dict:
        patch: ${monty.agent_id:agent_id_0}
        view_finder: ${monty.agent_id:agent_id_0}
      sm_to_lm_matrix:
      - - 0
      lm_to_lm_matrix: null
      lm_to_lm_vote_matrix: null
    env_interface_config:
      env_init_func: ${monty.class:tbp.monty.frameworks.environments.two_d_data.OmniglotEnvironment}
      env_init_args: {}
      transform:
      - _target_: tbp.monty.frameworks.environment_utils.transforms.DepthTo3DLocations
        agent_id: ${monty.agent_id:agent_id_0}
        sensor_ids:
        - patch
        resolutions: ${np.array:[[10, 10]]}
        world_coord: true
        zooms: 1
        get_all_points: true
        use_semantic_sensor: false
        depth_clip_sensors:
        - 0
        clip_value: 1.1
    train_env_interface_args:
      alphabets:
      - 0
      - 0
      - 0
      - 1
      - 1
      - 1
      characters:
      - 1
      - 2
      - 3
      - 1
      - 2
      - 3
      versions:
      - 1
      - 1
      - 1
      - 1
      - 1
      - 1
    train_env_interface_class: ${monty.class:tbp.monty.frameworks.environments.embodied_data.OmniglotEnvironmentInterface}
  _target_: tbp.monty.frameworks.experiments.pretraining_experiments.MontySupervisedObjectPretrainingExperiment
episodes: all
num_parallel: 16
print_cfg: false
quiet_habitat_logs: true
```