RobotLocomotion / drake

Model-based design and verification for robotics.
https://drake.mit.edu

visualize point clouds in meshcat #9891

Closed · RussTedrake closed this issue 5 years ago

RussTedrake commented 6 years ago

@kmuhlrad has an initial version in https://github.com/RobotLocomotion/6-881-backend/blob/km_lab_perception/psets/manipulation_station/perception.py

RussTedrake commented 5 years ago

Basic design

The constructor takes a CameraInfo, the meshcat zmq_url, and the meshcat pathname under which the point cloud will be visualized.

@system{ MeshcatRGBDVisualizer,
  @input_port{rgb_image}
  @input_port{depth_image}
  @input_port{camera_pose},
}

The camera_pose will just be an Isometry3d, because we want to be able to visualize raw sensor data. For my initial use case, this input port can be fixed. But dev/query_object.h has the calls I need to get the pose out of a SceneGraph (the non-dev query_object.h does not): https://github.com/RobotLocomotion/drake/blob/master/geometry/dev/query_object.h#L95.

The actual implementation will be a pared-down version of https://github.com/RobotLocomotion/drake/blob/master/perception/depth_image_to_point_cloud.cc#L45 that writes directly into a meshcat.geometry.PointCloud object. I think this is slightly better than inserting a DepthImageToPointCloud system into the loop (which has to do more general handling of short returns, long returns, etc., and uses a data structure I don't need).
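For concreteness, here is a minimal sketch of that pared-down conversion, assuming meshcat-python and plain pinhole intrinsics (fx, fy, cx, cy) standing in for Drake's CameraInfo. The function name and arguments are illustrative only, not Drake API:

```python
import numpy as np
import meshcat
import meshcat.geometry as g

def publish_depth_image(vis, path, depth, fx, fy, cx, cy, X_WC=np.eye(4)):
    """Back-project a depth image through the pinhole model and publish it.

    Mirrors the core of depth_image_to_point_cloud.cc without the more
    general short/long-return handling mentioned above.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    keep = np.isfinite(z) & (z > 0)            # drop invalid returns
    x = (u.ravel() - cx) * z / fx              # pinhole back-projection
    y = (v.ravel() - cy) * z / fy
    p_C = np.vstack((x, y, z))[:, keep]        # points in the camera frame C
    p_W = X_WC[:3, :3] @ p_C + X_WC[:3, 3:]    # express in the world frame W
    color = np.ones_like(p_W)                  # white for now; no RGB yet
    vis[path].set_object(
        g.PointCloud(p_W.astype(np.float32), color.astype(np.float32)))

vis = meshcat.Visualizer()                     # or Visualizer(zmq_url=...)
depth = np.full((480, 640), 1.0)               # synthetic wall, 1 m away
publish_depth_image(vis, "perception/cloud", depth,
                    fx=500.0, fy=500.0, cx=319.5, cy=239.5)
```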

RussTedrake commented 5 years ago

Actually, the nice thing about implementing the alternative (where the input is a point cloud) is that we can visualize more generic things -- fused point clouds, point clouds that have been post-processed, etc. So I'm now thinking, instead:

@system{ MeshcatPointCloudVisualizer,
  @input_port{point_cloud},
}

where the point_cloud is in the world frame and may or may not be colored. I would then offer a second system that converts the RGB-D outputs into a point cloud in the world frame:

@system{ RGBDToPointCloud,
  @input_port{rgb_image}
  @input_port{depth_image}
  @input_port{camera_pose},
  @output_port{point_cloud}
}

where the camera_pose could be an optional input (allowing the system to convert the point cloud from the camera frame to the parent frame).
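Under the same pinhole assumptions as the sketch above, the RGBDToPointCloud step could be a plain function like the following, where rgb is an (h, w, 3) uint8 image registered to the depth image; the name and arguments are again illustrative, not Drake API:

```python
import numpy as np

def rgbd_to_point_cloud(rgb, depth, fx, fy, cx, cy, X_WC=np.eye(4)):
    """Return (3 x N positions in the world frame, 3 x N colors in [0, 1])."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    keep = np.isfinite(z) & (z > 0)
    p_C = np.vstack(((u.ravel() - cx) * z / fx,
                     (v.ravel() - cy) * z / fy,
                     z))[:, keep]
    p_W = X_WC[:3, :3] @ p_C + X_WC[:3, 3:]    # apply the optional camera pose
    color = rgb.reshape(-1, 3).T[:, keep] / 255.0
    return p_W, color
```

Returning positions and colors as two separate 3 x N arrays is one way to sidestep the missing colored-point-cloud field for now.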

Unfortunately, it seems like we don't actually support colorized point clouds yet. So I think as a work-around I will do:

@system{ MeshcatPointCloudVisualizer,
  @input_port{point_cloud}
  @input_port{rgb_image (optional)},
}
@system{ DepthImageToPointCloud,
  @input_port{depth_image}
  @input_port{camera_pose (optional)},
  @output_port{point_cloud}
}
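For what it's worth, here is a rough pydrake sketch of the MeshcatPointCloudVisualizer half of that work-around. It uses today's pydrake names (which postdate this thread) together with meshcat-python, so treat it as an illustration of the design rather than the implementation that eventually landed:

```python
import numpy as np
import meshcat
import meshcat.geometry as g
from pydrake.common.value import AbstractValue
from pydrake.perception import PointCloud
from pydrake.systems.framework import LeafSystem

class MeshcatPointCloudVisualizer(LeafSystem):
    """Periodically publishes a PointCloud input to a meshcat path."""

    def __init__(self, vis, path, period_sec=1.0 / 30.0):
        super().__init__()
        self._vis = vis[path]
        self.DeclareAbstractInputPort(
            "point_cloud", AbstractValue.Make(PointCloud()))
        self.DeclarePeriodicPublishEvent(period_sec, 0.0, self._publish)

    def _publish(self, context):
        cloud = self.get_input_port(0).Eval(context)
        xyzs = cloud.xyzs()                    # 3 x N, float32
        keep = np.isfinite(xyzs).all(axis=0)   # drop NaN (invalid) returns
        color = np.ones((3, keep.sum()), dtype=np.float32)  # white fallback
        self._vis.set_object(g.PointCloud(xyzs[:, keep], color))
```

Once point clouds can carry color (what eventually landed), the _publish callback could read the colors off the cloud itself instead of using the white fallback or a separate rgb_image port.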

@kunimatsu-tri is there a philosophical reason that we don't have kXYZRGB as a supported point cloud field (yet)? https://github.com/RobotLocomotion/drake/blob/652b02bfebfb370cba525e475eb36bfb29e22dfd/perception/point_cloud_flags.h#L24

Ultimately, it seems like SceneGraph already knows all of this and generates the data, and it would be cool if SceneGraph knew how to render it to arbitrary visualizers. So everything here is somehow a short-term work-around.

kunimatsu-tri commented 5 years ago

The comment in the original PR (#7014) says "This only contains fields for XYZ and extra fields (to focus the review). Colors and normals (and possibly other fields) will be added in subsequent PRs."

I guess it's time to implement it? @EricCousineau-TRI

RussTedrake commented 5 years ago

@kmuhlrad -- might you be able to knock this out now? I know you've got all of the pieces.

EricCousineau-TRI commented 5 years ago

Resolved by #10911. (I forgot to hoist this into the top-level description of the PR; that is how issues get auto-closed.)