ARISE-Initiative / robosuite

robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
https://robosuite.ai

Transformation matrices for multiple camera views #494

Closed oliviaylee closed 1 month ago

oliviaylee commented 3 months ago

Hi,

I am using Open3D to generate point clouds from RGB-D image observations. I am able to do this for a single viewpoint, but once I try to visualize multiple viewpoints, the visualization of each viewpoint is oddly flat. I tried using ICP to align the point clouds, but they remain misaligned despite hyperparameter tuning. (See the attached visuals for how the point clouds evolve with more viewpoints and with ICP.)

Based on the flat visualization of multiple viewpoints, I suspect some of the transformation matrices are wrongly applied. Could you clarify whether the camera extrinsic matrix projects from camera coordinates to world coordinates? I am trying to project each individual camera viewpoint into world coordinates and then combine the point clouds there.
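For reference, my understanding is that robosuite's `camera_utils.get_camera_extrinsic_matrix` returns the camera pose in the world frame, i.e. a camera-to-world transform (if your matrix is actually world-to-camera, invert it first). A minimal NumPy sketch of how I apply it per view and merge in the world frame (`transform_to_world` is my own helper, not a robosuite function):

```python
import numpy as np

def transform_to_world(points_cam, extrinsic):
    """Apply a 4x4 camera-to-world extrinsic to an (N, 3) array of
    camera-frame points, returning (N, 3) world-frame points.

    Assumes `extrinsic` maps camera coordinates to world coordinates;
    if yours goes the other way, pass np.linalg.inv(extrinsic).
    """
    n = points_cam.shape[0]
    homog = np.hstack([points_cam, np.ones((n, 1))])  # (N, 4) homogeneous
    return (extrinsic @ homog.T).T[:, :3]

# Merging views: transform each view's cloud into the world frame with
# its own extrinsic, then concatenate. With correct extrinsics the
# clouds should already be aligned, with no ICP needed.
```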

(Images: frontview, frontview_birdview, frontview_birdview_icp)

Thank you.

Steve-Tod commented 3 months ago

Hi, there are two potential issues that might cause this.

  1. The depth_trunc parameter is incorrect (see the Open3D docs: https://www.open3d.org/docs/0.7.0/python_api/open3d.geometry.create_point_cloud_from_depth_image.html).
  2. The depth values themselves are incorrect.
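On point 1, `depth_trunc` caps the range Open3D will back-project, so geometry beyond it is silently dropped. On point 2, note that MuJoCo (and hence robosuite) renders a *normalized* depth buffer in [0, 1], which must be converted to metric depth before building a point cloud; robosuite ships a helper for this (`camera_utils.get_real_depth_map`). A small NumPy sketch of both effects, where the near/far conversion formula is my assumption about what that helper does:

```python
import numpy as np

def normalized_to_metric(depth_norm, near, far):
    """Convert a MuJoCo-style normalized depth buffer in [0, 1] to
    metric depth using the camera's near/far clipping planes.
    (Formula assumed; robosuite's camera_utils.get_real_depth_map
    is the canonical implementation.)
    """
    return near / (1.0 - depth_norm * (1.0 - near / far))

def truncate_depth(depth_m, depth_trunc=3.0):
    """Mimic Open3D's depth_trunc: metric depths beyond the threshold
    are zeroed out (Open3D treats 0 as invalid and skips those pixels).
    """
    out = depth_m.copy()
    out[out > depth_trunc] = 0.0
    return out
```

Feeding the normalized buffer straight into Open3D would compress the whole scene into a thin slab, which matches the "oddly flat" clouds described above.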

I would suggest first plotting the depth as an image using plt.imshow and checking the values of the depth map.
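Something along these lines (a hypothetical `inspect_depth` helper; if the printed range is entirely inside [0, 1], the buffer is likely still normalized and needs the metric conversion mentioned above):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this for interactive use
import matplotlib.pyplot as plt

def inspect_depth(depth, out_path="depth_check.png"):
    """Print basic statistics and save a rendering of the depth map.
    Returns (min, max) so the range can be sanity-checked."""
    lo, hi = float(depth.min()), float(depth.max())
    print(f"depth range: min={lo:.4f} max={hi:.4f}")
    plt.imshow(np.squeeze(depth), cmap="viridis")
    plt.colorbar(label="depth")
    plt.savefig(out_path)
    plt.close()
    return lo, hi
```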

kevin-thankyou-lin commented 1 month ago

Hi @oliviaylee, I'm assuming this is resolved for now? Closing, but let me know if you need more help! This is certainly resolvable --- e.g. this repo's file has some code