MIT-SPARK / Hydra


[QUESTION] How does the new code obtain the semantic pointclouds #58

Open MorenMoren opened 4 months ago

MorenMoren commented 4 months ago

First of all, thank you for releasing such an interesting work.

I have looked at issue #1, and I'm preparing to plug in my own semantic segmentation network. However, in the newly published code, I don't see any launch file using the depth_image_proc/point_cloud_xyzrgb nodelet (while in the old version, I saw it in Kimera-Semantics).

So my question is: how does the newly published code obtain the semantic pointcloud?

nathanhhughes commented 3 months ago

Thanks for your interest in our work! In the newest release, there are now multiple options for providing input to Hydra. Going the previous route (publishing the labels as colors and using depth_image_proc/point_cloud_xyzrgb to turn that into a pointcloud) still works, but we've also added the option for Hydra to subscribe to the RGB image, depth image, and semantic labels directly. Additionally, the label image can be a single-channel integer image where each pixel is the corresponding class (as we've moved away from Kimera-Semantics and the design decision to use colors to encode semantics). Most datasets that we have now use this new interface (which is why the depth_image_proc nodelet is no longer used).

It is on my list of things to do to update #1 and provide a more comprehensive guide, but I'm not sure when I'll be able to get to it. In the meantime, I would suggest taking a look at the uhumans2 launch file to see how the new interface is used.