anthonysimeonov / ndf_robot

Implementation of the method proposed in the paper "Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation"
MIT License
215 stars 34 forks

Obtaining a point cloud for the subject object #8

Open kirby516 opened 2 years ago

kirby516 commented 2 years ago

Hi!

In the simulation, you used pybullet's functionality to acquire segmentation masks. How did you acquire them on the real device? Could you explain how to obtain only the point cloud of the target object using 4 cameras on a real robot? It would be great if you could publish your method or code. Best regards.

Julien-Gustin commented 1 year ago

Hi @anthonysimeonov,

I am also interested to know how you managed to get only the point cloud of the object, without noise such as the table it rests on.

Best regards.

Sheradil commented 1 year ago

@Julien-Gustin I don't know how they did it, but one possible approach is:

  1. Set up the 4 cameras.
  2. Empty the area in between.
  3. Gather data from the cameras (without an object being placed).
  4. Store this data. It is the background.
  5. Place the object of interest in the center of the 4 cameras.
  6. Gather data again.
  7. For each pixel in the depth image, compare the current distance with the stored background distance. If the current distance is less than the background distance, that pixel belongs to an object placed in the area.

That's how you can locate objects. You should subtract a small epsilon from the background to compensate for sensor noise.
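The steps above can be sketched in a few lines of NumPy. This is just an illustration of the background-subtraction idea, not code from the repo; the function name, the `epsilon` value, and the toy depth images are all made up for the example:

```python
import numpy as np

def segment_foreground(depth, background, epsilon=0.01):
    """Return a boolean mask of pixels closer than the stored background.

    depth, background: HxW depth images in meters (0 = invalid reading).
    epsilon: margin subtracted from the background to absorb sensor noise.
    """
    valid = (depth > 0) & (background > 0)         # ignore invalid returns
    return valid & (depth < background - epsilon)  # closer => object pixel

# Toy example: flat background at 1.0 m, an object patch at 0.6 m.
background = np.full((4, 4), 1.0)
depth = background.copy()
depth[1:3, 1:3] = 0.6                              # object in the center
mask = segment_foreground(depth, background)
```

You would run this per camera, then deproject only the masked depth pixels (using each camera's intrinsics and extrinsics) and merge the resulting points into one object point cloud.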