isaac-sim / IsaacLab

Unified framework for robot learning built on NVIDIA Isaac Sim
https://isaac-sim.github.io/IsaacLab

[Bug Report] TiledCamera does not follow robot #745

Closed kensukenk closed 1 month ago

kensukenk commented 3 months ago


Describe the bug

When I use the normal Camera object attached to the wrist of a manipulator, the observations from the camera appropriately follow the wrist. However, when I switch to TiledCamera, the camera viewpoint doesn't move with the manipulator and stays at the wrist's position from the first simulation step.

I understand that the Camera and TiledCamera will produce different images from the same position, but my issue lies primarily in that the TiledCamera does not stay attached to the robot.

Steps to reproduce

Modified version of '/omni.isaac.lab_tasks/omni/isaac/lab_tasks/manager_based/manipulation/lift/config/franka/joint_pos_env_cfg.py'

self.scene.camera = TiledCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/panda_hand/front_cam",
    update_period=0.01,
    height=64,
    width=64,
    data_types=["rgb"],
    spawn=PinholeCameraCfg(
        focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 20)
    ),
    offset=TiledCameraCfg.OffsetCfg(
        pos=(0.05, 0.0, -0.05),
        rot=(0.7059418, -0.0353547, 0.0353265, 0.7065042),
        convention="ros",
    ),
)

If I switch TiledCameraCfg to CameraCfg, I get the expected behavior. However, I am unable to simulate many environments with the normal camera (which I believe is expected), so I am hoping there is an easy fix to keep the camera attached to the manipulator wrist.
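(Aside: offset quaternions in configs like the one above are easy to get wrong. A minimal NumPy sketch for sanity-checking one offline; the (w, x, y, z) ordering is my assumption about the rot tuple, and the helper function is mine, not part of Isaac Lab:)

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a quaternion in (w, x, y, z) order to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# The rot tuple from the TiledCameraCfg offset above.
q = np.array([0.7059418, -0.0353547, 0.0353265, 0.7065042])
print(np.linalg.norm(q))  # should be ~1.0 for a valid rotation
R = quat_to_matrix(q)
print(R @ R.T)            # should be ~identity
```

If the norm is far from 1, or R @ R.T is not close to the identity, the offset rotation is malformed before Isaac Lab conventions even come into play.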


Mayankm96 commented 3 months ago

Hi @kensukenk,

I saw this issue with one of our demos, source/standalone/demos/cameras.py, as well. The current workaround is to disable fabric by setting use_fabric to False in the simulation configuration.

https://github.com/isaac-sim/IsaacLab/blob/main/source/standalone/demos/cameras.py#L11-L12

I was told this issue was resolved in Isaac Sim 4.1, but when I tried today, it seems to persist. I am checking with the team to see if there is something on Isaac Lab's side that needs to be adapted as well for the fix.
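For reference, a sketch of what that workaround could look like in a config. This assumes the Isaac Lab 1.x sim_utils API as it is used elsewhere in this thread; field names may differ across versions:

```python
# Sketch of the suggested workaround: disable fabric so that camera prims
# keep following their parent bodies. Field names follow the Isaac Lab 1.x
# configs used elsewhere in this thread (an assumption, check your version).
import omni.isaac.lab.sim as sim_utils

sim = sim_utils.SimulationCfg(
    dt=1 / 120.0,
    use_fabric=False,         # the key change suggested above
    device="cpu",
    use_gpu_pipeline=False,
    physx=sim_utils.PhysxCfg(use_gpu=False),
)
```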

kensukenk commented 2 months ago

Thank you for your quick response! When I disabled fabric (use_fabric=False, device='cpu', use_gpu_pipeline=False, physx.use_gpu=False) and visualized the resulting images, I don't see the robot move at all. This is also the case when I visualize in the Isaac Sim GUI. However, if I print the robot's joint positions/velocities, I can see that they change with each environment step. Would upgrading to Isaac Sim 4.1 fix this?

kensukenk commented 2 months ago

Just experimented with it some more: I had to set livestream=1 in order for the camera position to update with fabric disabled. In case there ends up being a straightforward way to fix the issue without disabling fabric, I'll leave this issue open for now.

(Tangentially related, and I'm not sure if I should open another issue: is there a way to have two TiledCameras in the same environment? When I try, I get the error '[omni.sensors.tiled.plugin] CUDA error 1: cudaErrorInvalidValue - invalid arg', but with the normal Camera I can render and save from multiple camera views without issue.)

notFoundThisPerson commented 2 months ago

I have a similar problem: the image given by the gripper camera is a few steps behind the current physics scene, so I can't get the right transformation matrix for the point cloud, as shown in the image. The red point cloud is the result of the gripper camera, and the axes are the current camera pose read from gripper_camera.data. Obviously it is far away from the gripper hand, which is visible in the point cloud from the static camera. Only after I manually ran a few env.sim.render() calls did the rendering result become correct. Is there any way to fix this problem?

The configuration is as follows:

@configclass
class SceneCfg(InteractiveSceneCfg):
    ...

    gripper_camera = CameraCfg(
        prim_path="{ENV_REGEX_NS}/robot/panda_hand/camera",
        height=480,
        width=640,
        data_types=["rgb", "distance_to_image_plane"],
        spawn=sim_utils.PinholeCameraCfg(
            focal_length=24.0,
            focus_distance=400.0,
            horizontal_aperture=53.7,
            clipping_range=(0.01, 1.0e5),
        ),
        offset=CameraCfg.OffsetCfg(
            pos=(0.05, 0, 0.05), rot=(0.70441603, -0.06162842, -0.06162842, 0.70441603), convention="ros"
        ),
    )

    static_camera = CameraCfg(
        prim_path="{ENV_REGEX_NS}/static_camera",
        height=480,
        width=640,
        data_types=["rgb", "distance_to_image_plane"],
        spawn=sim_utils.PinholeCameraCfg(
            focal_length=24.0,
            focus_distance=400.0,
            horizontal_aperture=53.7,
            clipping_range=(0.01, 1.0e5),
        ),
        offset=CameraCfg.OffsetCfg(
            pos=(2., 2.25, 2), rot=(-0.16793911, 0.3131961, 0.82376817, -0.44171333), convention="ros"
        ),
    )

@configclass
class EnvCfg(ManagerBasedEnvCfg):
    sim = sim_utils.SimulationCfg(dt=1 / 120.0, substeps=5, device="cpu", use_gpu_pipeline=False, use_fabric=True)
    scene = SceneCfg(num_envs=1, env_spacing=50)
    actions = ActionsCfg()
    observations = DefaultObservationsCfg()
    events = RandomRobotSceneEvents()

    def __post_init__(self):
        self.decimation = 10

Isaac Sim version: 4.0.0
Isaac Lab version: 1.1.0
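As a side note on timing: with the config above, each environment step advances the physics decimation times before observations are refreshed, which is worth keeping in mind when relating camera frames to physics steps. Quick arithmetic in plain Python:

```python
# Timing implied by the config above.
sim_dt = 1 / 120.0   # physics step from SimulationCfg
decimation = 10      # set in __post_init__
control_dt = sim_dt * decimation
print(control_dt)    # 0.0833... s, i.e. the environment steps at 12 Hz
```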

jih189 commented 2 months ago

In a manager-based task, may I ask how you visualized those point clouds?

notFoundThisPerson commented 2 months ago

In a manager-based task, may I ask how you visualized those point clouds?

I did not visualize the point cloud in Isaac Lab; instead I saved the depth image and the camera pose and visualized them using the Point Cloud Library (PCL).
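For anyone doing the same offline reconstruction, here is a minimal NumPy sketch of back-projecting a saved distance_to_image_plane depth image through pinhole intrinsics and a 4x4 camera-to-world pose. The function name and the assumption of a ROS-style optical frame (z forward, x right, y down, matching convention="ros") are mine, not taken from Isaac Lab:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, T_world_cam):
    """Back-project a depth image (distance to the image plane, in meters)
    into world-frame 3-D points, assuming a ROS-style optical frame."""
    h, w = depth.shape
    # Pixel grid: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Homogeneous camera-frame points, then transform to world frame.
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    return (pts_cam @ T_world_cam.T)[:, :3]

# Demo: flat 2 m depth plane seen through an identity camera pose.
depth = np.full((3, 3), 2.0)
pts = depth_to_points(depth, fx=50.0, fy=50.0, cx=1.0, cy=1.0,
                      T_world_cam=np.eye(4))
print(pts[4])  # center pixel -> [0. 0. 2.]
```

With a stale camera pose (the lag described above), T_world_cam is wrong and the whole cloud lands away from the gripper, which matches the symptom in the screenshot.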

rthaker01 commented 1 month ago

Hi @kensukenk / @jih189 / @notFoundThisPerson - This should be resolved in the latest release of Isaac Lab 1.2/Isaac Sim 4.2. Please try and confirm.