StanfordVL / OmniGibson

OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine. Join our Discord for support: https://discord.gg/bccR5vGFEx
https://behavior.stanford.edu/omnigibson/
MIT License

The issue with seg_instance in obs_modalities #895

Closed RookieXwc closed 1 month ago

RookieXwc commented 1 month ago

@cgokmen Hello! I tried running the demo python -m omnigibson.examples.robots.robot_control_example. The demo runs fine with Isaac Sim 2023.1.1 and the main branch. However, with Isaac Sim 4.1 and the og-develop branch, the seg_instance modality fails with the following error:

  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/envs/env_base.py", line 499, in get_obs
    obs[robot.name], info[robot.name] = robot.get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/robots/robot_base.py", line 283, in get_obs
    obs_dict[sensor_name], info_dict[sensor_name] = sensor.get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/sensor_base.py", line 78, in get_obs
    obs, info = self._get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/vision_sensor.py", line 304, in _get_obs
    self._remap_modality(modality, obs, info, raw_obs)
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/vision_sensor.py", line 323, in _remap_modality
    obs[modality], info[modality] = self._remap_semantic_segmentation(obs[modality], id_to_labels)
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/vision_sensor.py", line 381, in _remap_semantic_segmentation
    assert set(image_keys.tolist()).issubset(
AssertionError: Semantic segmentation image does not match the original id_to_labels mapping.

I printed out image_keys and replicator_mapping in the _remap_semantic_segmentation function of vision_sensor.py and found that replicator_mapping is missing 9: 'window' and 10: 'picture'. What could be the reason for this?

image_keys: tensor([ 2,  3,  4,  5,  6,  8, 10, 11], dtype=torch.int32)
replicator_mapping: {0: 'background', 1: 'unlabelled', 2: 'bottom_cabinet', 3: 'agent', 4: 'straight_chair', 5: 'walls', 6: 'breakfast_table', 8: 'floors', 11: 'electric_switch'}
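
(For reference, the assertion that fails here essentially checks that every ID present in the rendered segmentation image also has an entry in replicator's id_to_labels mapping. A minimal sketch of that check using the values printed above, not OmniGibson's exact source:)

import torch

# Illustrative sketch of the failing consistency check (not OmniGibson's exact code):
# every ID present in the rendered segmentation image must appear in id_to_labels.
image_keys = torch.tensor([2, 3, 4, 5, 6, 8, 10, 11], dtype=torch.int32)
replicator_mapping = {0: "background", 1: "unlabelled", 2: "bottom_cabinet", 3: "agent",
                      4: "straight_chair", 5: "walls", 6: "breakfast_table",
                      8: "floors", 11: "electric_switch"}

missing = set(image_keys.tolist()) - set(replicator_mapping.keys())
print(missing)  # {10}: this ID is rendered in the image but absent from the mapping
assert not missing, "Semantic segmentation image does not match the original id_to_labels mapping."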
saliteta commented 1 month ago

Same problem here. I'm not sure how the semantic segmentation code works. Here is part of my code that saves the images and segmentation masks:

from pathlib import Path

import cv2
import numpy as np
import omnigibson as og
from omnigibson.sensors.vision_sensor import VisionSensor


def camera_saving(camera_mover: VisionSensor,
                  env: og.Environment,
                  saving_path: Path,
                  position: np.ndarray,
                  orientation: np.ndarray,
                  debug: bool = False,
                  debug_path: Path = None):
    # Move the camera to the requested pose.
    camera_mover.set_position_orientation(
        position=position,
        orientation=orientation,
    )
    # Step the environment a few times so the render buffers catch up.
    for _ in range(4):
        env.step(np.array([0, 0]))
    current_pose, current_orientation = camera_mover.get_position_orientation()
    assert np.allclose(position, current_pose, atol=1e-2) and \
        np.allclose(orientation, current_orientation, atol=1e-2), \
        f"The camera setting is not correct: current_pose {current_pose} vs set pose {position}, " \
        f"current_orientation {current_orientation} vs set orientation {orientation}"

    obs = camera_mover.get_obs()[0]
    saved_dict = {
        'position': position,
        'orientation': orientation,
        'rgb': obs["rgb"][..., :3][..., ::-1],  # drop alpha and convert RGB -> BGR
        'depth': obs["depth_linear"],
        'semantics': obs["seg_semantic"],
    }
    np.savez_compressed(saving_path, **saved_dict)
    if debug and debug_path:
        # Write the BGR image out for quick visual inspection.
        cv2.imwrite(str(debug_path), np.ascontiguousarray(saved_dict['rgb']))

And here is how to enable the camera:

    cam_mover = VisionSensor(
        prim_path="/World/viewer_camera",                     # prim path
        name="my_vision_sensor",                              # sensor name
        modalities=["rgb", "depth_linear", "seg_semantic"],   # sensor modalities
        enabled=True,                                         # enable the sensor
        image_height=480,                                     # image height in pixels
        image_width=640,                                      # image width in pixels
        focal_length=17,
        clipping_range=(0.01, 1000000.0),                     # near/far vision distance
    )
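
(For completeness, a hedged example of how these two pieces could be tied together. The env and cam_mover objects are assumed to come from the setup above, and the pose and file paths are placeholders for illustration:)

from pathlib import Path
import numpy as np

# Placeholder pose and output paths, purely for illustration.
position = np.array([1.0, 0.0, 1.5])
orientation = np.array([0.0, 0.0, 0.0, 1.0])  # xyzw quaternion
camera_saving(
    camera_mover=cam_mover,
    env=env,
    saving_path=Path("frames/frame_0000.npz"),
    position=position,
    orientation=orientation,
    debug=True,
    debug_path=Path("frames/frame_0000.png"),
)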

And the error is:

  File "/butian/free_exploration/way_points.py", line 70, in camera_saving
    obs = camera_mover.get_obs()[0]
  File "/omnigibson-src/omnigibson/sensors/sensor_base.py", line 76, in get_obs
    obs, info = self._get_obs()
  File "/omnigibson-src/omnigibson/sensors/vision_sensor.py", line 274, in _get_obs
    obs[modality], info[modality] = self._remap_semantic_segmentation(obs[modality], id_to_labels)
  File "/omnigibson-src/omnigibson/sensors/vision_sensor.py", line 313, in _remap_semantic_segmentation
    assert set(np.unique(img)).issubset(set(replicator_mapping.keys())), "Semantic segmentation image does not match the original id_to_labels mapping."
hang-yin commented 1 month ago

Hello all! This is a regression introduced by recent Isaac Sim version updates. Here's the fix for this problem; it was already merged into og-develop last Friday: https://github.com/StanfordVL/OmniGibson/pull/885. Please let me know if it works for you all!

RookieXwc commented 1 month ago

Sorry, it still seems to have issues. But everything works well under Isaac Sim version 2023.1.1 and the main branch.

 File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/envs/env_base.py", line 569, in _post_step
    obs, obs_info = self.get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/envs/env_base.py", line 499, in get_obs
    obs[robot.name], info[robot.name] = robot.get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/robots/robot_base.py", line 281, in get_obs
    obs_dict[sensor_name], info_dict[sensor_name] = sensor.get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/sensor_base.py", line 78, in get_obs
    obs, info = self._get_obs()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/vision_sensor.py", line 307, in _get_obs
    self._remap_modality(modality, obs, info, raw_obs)
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/vision_sensor.py", line 328, in _remap_modality
    obs[modality], info[modality] = self._remap_instance_segmentation(
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/sensors/vision_sensor.py", line 453, in _remap_instance_segmentation
    semantic_label = semantic_img[img == key].unique().item()
RuntimeError: a Tensor with 2 elements cannot be converted to Scalar
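
(For reference, torch's .item() only works on single-element tensors, so the line above fails whenever the pixels of one instance ID carry more than one semantic ID. A minimal reproduction of that failure mode:)

import torch

semantic_img = torch.tensor([[5, 5], [8, 8]])  # pixels carry two different semantic IDs
img = torch.tensor([[1, 1], [1, 1]])           # a single instance ID covers all pixels
key = 1

labels = semantic_img[img == key].unique()     # tensor([5, 8]) -> two elements
labels.item()  # RuntimeError: a Tensor with 2 elements cannot be converted to Scalar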
hang-yin commented 1 month ago

Hi @RookieXwc, so sorry for the trouble! This seems to be another Isaac Sim regression that we weren't aware of. It will need a more involved fix, but for now here's a hotfix that should help: https://github.com/StanfordVL/OmniGibson/pull/900

I have reproduced your error with the robot control example in Rs_int, and have fixed it with the above branch. Please let me know if this works!

RookieXwc commented 1 month ago

Thank you for the timely fix. It seems that only some visualization-related issues remain now.

2024-09-23 07:27:29 [132,903ms] [Warning] [omnigibson.sensors.vision_sensor] Some semantic IDs in the image are not in the id_to_labels mapping. This is a known issue with the replicator and should only affect a few pixels. These pixels wil
Pressed None. Action: [0.0, 0.0]
Traceback (most recent call last):
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/utils/ui_utils.py", line 810, in keyboard_event_handler
    self.robot.visualize_sensors()
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/robots/robot_base.py", line 391, in visualize_sensors
    ob = segmentation_to_rgb(ob, N=256) / 255.0
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/utils/vision_utils.py", line 179, in segmentation_to_rgb
    use_colors = randomize_colors(N=N, bright=True)
  File "/home/gpu/.local/share/ov/pkg/isaac-sim-4.1.0/OmniGibson/omnigibson/utils/vision_utils.py", line 159, in randomize_colors
    colors[0] = [0, 0, 0]  # First color is black
Pressed None.
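
(For context, visualization helpers like segmentation_to_rgb map each integer segmentation ID to a color from a random palette. A generic, self-contained sketch of that idea, not OmniGibson's implementation:)

import numpy as np

def seg_to_rgb(seg, N=256, seed=0):
    # Generic sketch: build a random palette of N colors and look up each pixel's ID.
    rng = np.random.default_rng(seed)
    palette = rng.integers(0, 256, size=(N, 3), dtype=np.uint8)
    palette[0] = [0, 0, 0]  # reserve the first color (ID 0) for black
    return palette[np.asarray(seg) % N]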
hang-yin commented 1 month ago

@RookieXwc The visualization issue should be fixed here: https://github.com/StanfordVL/OmniGibson/pull/901

We will soon have another code release, but before that happens, I'll push bug fixes to this branch.