facebookresearch / habitat-lab

A modular high-level library to train embodied AI agents across a variety of tasks and environments.
https://aihabitat.org/

Multiple Objects in Pick Dataset Cause KeyErrors in Pick Sensors #808

Closed mpiseno closed 1 year ago

mpiseno commented 2 years ago

Habitat-Lab and Habitat-Sim versions

Habitat-Lab: v0.2.1

Habitat-Sim: v0.2.1

Habitat is under active development, and we advise users to restrict themselves to stable releases of Habitat-Lab and Habitat-Sim. The bug you are about to report may already be fixed in the latest version.

Master branch contains 'bleeding edge' code, but we do appreciate bug reports for it!

šŸ› Bug

I generated a dataset that is supposed to have multiple types of objects on a table using the following command:

python -m habitat.datasets.rearrange.rearrange_generator --run --config data/table.yaml --num-episodes 10 --out data/table_pick.json.gz

Trying to run an episode via a simple example script leads to the following error:

Traceback (most recent call last):
  File "run.py", line 31, in <module>
    env.reset()
  File "/Users/michaelpiseno/src/habitat-lab/habitat/core/env.py", line 259, in reset
    observations=observations,
  File "/Users/michaelpiseno/src/habitat-lab/habitat/core/embodied_task.py", line 162, in reset_measures
    measure.reset_metric(*args, **kwargs)
  File "/Users/michaelpiseno/src/habitat-lab/habitat/tasks/rearrange/sub_tasks/pick_sensors.py", line 72, in reset_metric
    **kwargs
  File "/Users/michaelpiseno/src/habitat-lab/habitat/tasks/rearrange/rearrange_sensors.py", line 657, in reset_metric
    **kwargs
  File "/Users/michaelpiseno/src/habitat-lab/habitat/tasks/rearrange/sub_tasks/pick_sensors.py", line 99, in update_metric
    dist_to_goal = ee_to_object_distance[task.targ_idx]
KeyError: 0
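To make the failure mode concrete: update_metric builds ee_to_object_distance as a dictionary, and the lookup with task.targ_idx misses once the episode contains more than one object. The snippet below is a minimal, purely hypothetical illustration of that kind of key mismatch; the keys, values, and keying scheme are made up and this is not the actual habitat-lab code.

# Hypothetical illustration only, not the habitat-lab implementation:
# the distance dict is keyed by one id scheme (here, made-up per-object ids),
# while the pick sensor indexes it with the episode-local target index.
ee_to_object_distance = {3: 0.42, 7: 0.91}      # assumed keys for two sampled objects
targ_idx = 0                                    # what task.targ_idx holds
dist_to_goal = ee_to_object_distance[targ_idx]  # raises KeyError: 0, as in the traceback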

Steps to Reproduce

Steps to reproduce the behavior:

  1. Generate a dataset using the dataset config below (note that num_samples is not the default [1, 1] because I am trying to generate multiple objects). This step works fine; a quick sanity check on the generated file is sketched right after this list.
  2. Run Habitat using the Habitat config and the example script below. This gives the KeyError. NOTE: the exact same code/workflow works fine if I set num_samples to [1, 1].
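For reference, this is the sanity check mentioned in step 1 to confirm that the generated episodes really do contain multiple objects and a single target. The field names used here ("episodes", "rigid_objs", "targets") are assumptions about the RearrangeDataset JSON layout and may differ between versions.

import gzip
import json

# Inspect the generated dataset; field names are assumed and may differ per version.
with gzip.open("data/table_pick.json.gz", "rt") as f:
    data = json.load(f)

for i, ep in enumerate(data.get("episodes", [])):
    n_objs = len(ep.get("rigid_objs", []))
    n_targets = len(ep.get("targets", {}))
    print(f"episode {i}: {n_objs} objects, {n_targets} targets")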

Dataset config:

---
dataset_path: "data/replica_cad/replicaCAD.scene_dataset_config.json"
additional_object_paths:
  - "data/objects/ycb/"
scene_sets:
  -
    name: "default"
    included_substrings:
      - "v3_sc0_staging_00"
    excluded_substrings: []
    comment: "The first macro variation from the 105 ReplicaCAD variations."

object_sets:
  -
    name: "kitchen"
    included_substrings:
      - "003_cracker_box"
      - "005_tomato_soup_can"
      - "008_pudding_box"
      - "011_banana"
      - "013_apple"
      - "024_bowl"
      - "025_mug"
    excluded_substrings: []

receptacle_sets:
  -
    name: "table"
    included_object_substrings:
      - "frl_apartment_table_01"
      - "frl_apartment_table_02"
      - "frl_apartment_table_03"
    excluded_object_substrings: []
    included_receptacle_substrings:
      - ""
    excluded_receptacle_substrings: []
    comment: "The empty substrings act like wildcards, selecting all receptacles for all objects."

scene_sampler:
  type: "subset"
  params:
    scene_sets: ["default"]
  comment: "Samples from ReplicaCAD 105 variations with static furniture."

object_samplers:
  -
    name: "kitchen_table"
    type: "uniform"
    params:
      object_sets: ["kitchen"]
      receptacle_sets: ["table"]
      num_samples: [2, 5] # samples the number of objects in the range [min, max); see the note after this config
      orientation_sampling: "up"
      sample_region_ratio: 0.5
      unique: true

object_target_samplers:
  -
    name: "kitchen_table_targets"
    type: "uniform"
    params:
      object_samplers: ["kitchen_table"]
      receptacle_sets: ["table"]
      num_samples: [1, 1]
      orientation_sampling: "up"
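As a small worked example of the sampler comment above: reading num_samples: [2, 5] as a half-open [min, max) range means each episode gets 2, 3, or 4 kitchen objects, while the target sampler keeps a single pick target per episode, which is exactly the multi-object situation that triggers the error. The snippet below only illustrates the half-open range; it is not the generator's actual sampling code.

import random

# Illustration of the half-open [min, max) range from the comment above;
# the real generator's sampling logic may differ.
n_objects = random.randrange(2, 5)  # yields 2, 3, or 4 objects per episode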

Habitat config:

ENVIRONMENT:
    MAX_EPISODE_STEPS: 200
DATASET:
    TYPE: RearrangeDataset-v0
    SPLIT: train
    DATA_PATH: data/table_pick.json.gz
    SCENES_DIR: "data/replica_cad/"
TASK:
    TYPE: RearrangePickTask-v0
    MAX_COLLISIONS: -1.0
    COUNT_OBJ_COLLISIONS: True
    COUNT_ROBOT_OBJ_COLLS: False
    DESIRED_RESTING_POSITION: [0.5, 0.0, 1.0]

    # In radians
    BASE_ANGLE_NOISE: 0.15
    BASE_NOISE: 0.05
    CONSTRAINT_VIOLATION_ENDS_EPISODE: True
    FORCE_REGENERATE: False

    # Measurements for composite tasks.
    REWARD_MEASUREMENT: "rearrangepick_reward"
    SUCCESS_MEASUREMENT: "rearrangepick_success"

    # If true, does not care about navigability or collisions with objects when spawning
    # robot
    EASY_INIT: False

    TARGET_START_SENSOR:
        TYPE: "TargetStartSensor"
        GOAL_FORMAT: "CARTESIAN"
        DIMENSIONALITY: 3
    GOAL_SENSOR:
        TYPE: "GoalSensor"
        GOAL_FORMAT: "CARTESIAN"
        DIMENSIONALITY: 3
    ABS_TARGET_START_SENSOR:
        TYPE: "AbsTargetStartSensor"
        GOAL_FORMAT: "CARTESIAN"
        DIMENSIONALITY: 3
    ABS_GOAL_SENSOR:
        TYPE: "AbsGoalSensor"
        GOAL_FORMAT: "CARTESIAN"
        DIMENSIONALITY: 3
    JOINT_SENSOR:
        TYPE: "JointSensor"
        DIMENSIONALITY: 7
    END_EFFECTOR_SENSOR:
        TYPE: "EEPositionSensor"
    IS_HOLDING_SENSOR:
        TYPE: "IsHoldingSensor"
    RELATIVE_RESTING_POS_SENSOR:
        TYPE: "RelativeRestingPositionSensor"
    SENSORS: ["TARGET_START_SENSOR", "JOINT_SENSOR", "IS_HOLDING_SENSOR", "END_EFFECTOR_SENSOR", "RELATIVE_RESTING_POS_SENSOR"]
    ROBOT_FORCE:
        TYPE: "RobotForce"
        MIN_FORCE: 20.0
    EXCESSIVE_FORCE_SHOULD_END:
        TYPE: "ForceTerminate"
        MAX_ACCUM_FORCE: 5000.0
    ROBOT_COLLS:
        TYPE: "RobotCollisions"
    OBJECT_TO_GOAL_DISTANCE:
        TYPE: "ObjectToGoalDistance"
    END_EFFECTOR_TO_OBJECT_DISTANCE:
        TYPE: "EndEffectorToObjectDistance"
    END_EFFECTOR_TO_REST_DISTANCE:
        TYPE: "EndEffectorToRestDistance"
    DID_PICK_OBJECT:
        TYPE: "DidPickObjectMeasure"
    PICK_REWARD:
        TYPE: "RearrangePickReward"
        DIST_REWARD: 20.0
        SUCC_REWARD: 10.0
        PICK_REWARD: 20.0
        DROP_PEN: 5.0
        WRONG_PICK_PEN: 5.0
        USE_DIFF: True
        DROP_OBJ_SHOULD_END: False
        WRONG_PICK_SHOULD_END: False

        # General Rearrange Reward config
        CONSTRAINT_VIOLATE_PEN: 10.0
        FORCE_PEN: 0.001
        MAX_FORCE_PEN: 1.0
        FORCE_END_PEN: 10.0

    PICK_SUCCESS:
        TYPE: "RearrangePickSuccess"
        SUCC_THRESH: 0.15

    MEASUREMENTS:
        - "OBJECT_TO_GOAL_DISTANCE"
        - "ROBOT_FORCE"
        - "EXCESSIVE_FORCE_SHOULD_END"
        - "ROBOT_COLLS"
        - "END_EFFECTOR_TO_REST_DISTANCE"
        - "END_EFFECTOR_TO_OBJECT_DISTANCE"
        - "DID_PICK_OBJECT"
        - "PICK_SUCCESS"
        - "PICK_REWARD"
    ACTIONS:
        ARM_ACTION:
            TYPE: "ArmAction"
            ARM_CONTROLLER: "ArmRelPosAction"
            GRIP_CONTROLLER: "MagicGraspAction"
            ARM_JOINT_DIMENSIONALITY: 7
            GRASP_THRESH_DIST: 0.15
            DISABLE_GRIP: False
            DELTA_POS_LIMIT: 0.0125
            EE_CTRL_LIM: 0.015
    POSSIBLE_ACTIONS:
        - ARM_ACTION

SIMULATOR:
    ACTION_SPACE_CONFIG: v0
    AGENTS: ['AGENT_0']
    DEBUG_RENDER: False
    ROBOT_JOINT_START_NOISE: 0.0
    AGENT_0:
        HEIGHT: 1.5
        IS_SET_START_STATE: False
        RADIUS: 0.1
        SENSORS: ['HEAD_RGB_SENSOR', 'HEAD_DEPTH_SENSOR', 'ARM_RGB_SENSOR', 'ARM_DEPTH_SENSOR']
        START_POSITION: [0, 0, 0]
        START_ROTATION: [0, 0, 0, 1]
    HEAD_RGB_SENSOR:
        WIDTH: 128
        HEIGHT: 128
    HEAD_DEPTH_SENSOR:
        WIDTH: 128
        HEIGHT: 128
        MIN_DEPTH: 0.0
        MAX_DEPTH: 10.0
        NORMALIZE_DEPTH: True
    ARM_DEPTH_SENSOR:
        HEIGHT: 128
        MAX_DEPTH: 10.0
        MIN_DEPTH: 0.0
        NORMALIZE_DEPTH: True
        WIDTH: 128
    ARM_RGB_SENSOR:
        HEIGHT: 128
        WIDTH: 128

    # Agent setup
    ARM_REST: [0.6, 0.0, 0.9]
    CTRL_FREQ: 120.0
    AC_FREQ_RATIO: 4
    ROBOT_URDF: ./data/robots/hab_fetch/robots/hab_fetch.urdf
    ROBOT_TYPE: "FetchRobot"
    FORWARD_STEP_SIZE: 0.25

    # Grasping
    HOLD_THRESH: 0.09
    GRASP_IMPULSE: 1000.0

    DEFAULT_AGENT_ID: 0
    HABITAT_SIM_V0:
        ALLOW_SLIDING: True
        ENABLE_PHYSICS: True
        GPU_DEVICE_ID: 0
        GPU_GPU: False
        PHYSICS_CONFIG_FILE: ./data/default.physics_config.json
    SEED: 100
    SEMANTIC_SENSOR:
        HEIGHT: 480
        HFOV: 90
        ORIENTATION: [0.0, 0.0, 0.0]
        POSITION: [0, 1.25, 0]
        TYPE: HabitatSimSemanticSensor
        WIDTH: 640
    TILT_ANGLE: 15
    TURN_ANGLE: 10
    TYPE: RearrangeSim-v0

Script that errors:

import os

import gym
import habitat
import habitat_baselines.utils.gym_definitions as habitat_gym
from habitat.utils.visualizations.utils import observations_to_image
from habitat_baselines.utils.render_wrapper import overlay_frame
from habitat_sim.utils import viz_utils as vut

def insert_render_options(config):
    config.defrost()
    config.SIMULATOR.THIRD_RGB_SENSOR.WIDTH = 512
    config.SIMULATOR.THIRD_RGB_SENSOR.HEIGHT = 512
    config.SIMULATOR.AGENT_0.SENSORS.append("THIRD_RGB_SENSOR")
    config.freeze()
    return config

config = insert_render_options(
    habitat.get_config(
        os.path.join(
            habitat_gym.config_base_dir,
            "configs/nat-rl/pick-table.yaml",
        )
    )
)

env = habitat.Env(config=config)

for i in range(env.number_of_episodes):
    env.reset()

    print(f"Agent acting inside environment | Episode {i}")
    count_steps = 0
    # To save the video
    video_file_path = f"visuals/example_interact_ep{i}.mp4"
    video_writer = vut.get_fast_video_writer(video_file_path, fps=30)

    while not env.episode_over:
        action = env.action_space.sample()
        observations = env.step(action)  # noqa: F841
        info = env.get_metrics()

        render_obs = observations_to_image(observations, info)
        #render_obs = overlay_frame(render_obs, info)

        video_writer.append_data(render_obs)

        count_steps += 1
    print(f"Episode {i} finished after {count_steps} steps.")

    video_writer.close()
    if vut.is_notebook():
        vut.display_video(video_file_path)

Thanks!

ASzot commented 2 years ago

Yes, we are aware of this issue; it is actually fixed in https://github.com/facebookresearch/habitat-lab/pull/774, which we are working on merging into main. Your command should work on that branch.