youquanl / Segment-Any-Point-Cloud

[NeurIPS'23 Spotlight] Segment Any Point Cloud Sequences by Distilling Vision Foundation Models
https://ldkong.com/Seal

Details of Image Segmentation with SAM #2

ramdrop opened this issue 1 year ago

ramdrop commented 1 year ago

Thanks for sharing this cool project! I was confused about how you segment 2D images using VFMs:

As a result, SAM is able to segment images, with either point, box, or mask prompts, across different domains and data distributions. (from 6.2 Vision Foundation Models)

What did you feed to SAM to get the final segmented 2D image?

Thanks for your explanation.

ldkong1205 commented 1 year ago

Thanks for your interest in our work!

We feed each RGB image from the multi-view cameras to SAM (and the other VFMs) to generate a one-channel image (of the same size as the input image), in which each pixel holds a mask ID corresponding to a distinct superpixel.

See the figures below for an example.

[Figure: example input RGB image and the corresponding output superpixel mask]
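For reference, here is a minimal sketch of how such a one-channel superpixel map could be built from SAM's automatic mask generator. This is not the authors' exact pipeline; the checkpoint path, the image file name, and the "paint larger masks first" heuristic are assumptions for illustration.

```python
# Sketch: convert SAM's automatic masks into a one-channel superpixel ID map
# with the same H x W size as the input image. Not the official Seal code.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Model type and checkpoint path are assumptions for illustration.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
mask_generator = SamAutomaticMaskGenerator(sam)

# Hypothetical camera image; SAM expects an H x W x 3 uint8 RGB array.
image = cv2.cvtColor(cv2.imread("camera_front.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with a boolean 'segmentation' mask

# Paint larger masks first so smaller, overlapping masks keep their own IDs;
# pixels covered by no mask remain 0.
superpixels = np.zeros(image.shape[:2], dtype=np.int32)
for mask_id, m in enumerate(sorted(masks, key=lambda m: m["area"], reverse=True), start=1):
    superpixels[m["segmentation"]] = mask_id

np.save("camera_front_superpixels.npy", superpixels)  # one channel, same size as the input
```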

The code for generating superpixels with the VFMs we used will be available soon. Kindly refer to our code for more details.

We will also upload our generated superpixels to Google Drive later. Stay tuned!

ldkong1205 commented 1 year ago

Hi @ramdrop, the code for generating semantic superpixels on the nuScenes dataset is out. Kindly refer to SUPERPOINT.md for the detailed instructions.

Our generated superpixels will be available very soon.