Pointcept / SegmentAnything3D

[ICCV'23 Workshop] SAM3D: Segment Anything in 3D Scenes
https://arxiv.org/abs/2306.03908
MIT License

Value to determine camera angles #40

Open SjoerdBraaksma opened 1 year ago

SjoerdBraaksma commented 1 year ago

Hello! I would like to use this package to segment my own point cloud data. However, it does not contain RGB values. Also, I don't have a pre-segmented ground truth to evaluate the outcome. My question is twofold:

1) Is it advisable to use the number of returns / intensity value as a substitute for RGB? Do I need to rescale the values to fall into the RGB value range? (A rough sketch of what I mean follows below.)

2) Is there a metric to determine whether you have accumulated enough training images from different angles of your point cloud? I was thinking of something like a point-wise contribution metric that measures how often a point has been captured in an image (also sketched below). Or, across multiple training sequences, stopping when segments become stable across predictions?
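To make question 1 concrete, here is roughly the rescaling I have in mind (just my own illustration with numpy; the function name and shapes are assumptions, not anything from this repo):

```python
import numpy as np

def intensity_to_pseudo_rgb(intensity):
    """Rescale a per-point intensity channel into [0, 255] and replicate it
    into three channels so it can stand in for RGB."""
    intensity = np.asarray(intensity, dtype=np.float32)
    lo, hi = intensity.min(), intensity.max()
    # Guard against a constant channel, otherwise min-max normalize.
    scaled = np.zeros_like(intensity) if hi == lo else (intensity - lo) / (hi - lo)
    gray = (scaled * 255.0).astype(np.uint8)
    # Shape (N, 3): the same value in R, G and B, i.e. a grayscale point cloud.
    return np.stack([gray, gray, gray], axis=-1)
```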
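And for question 2, this is the kind of point-wise coverage count I was thinking of (again only a sketch; `point_visible_in_view` is a hypothetical visibility test, e.g. projecting points into each camera and checking against its depth map):

```python
import numpy as np

def capture_counts(points, views, point_visible_in_view):
    """Count, for each point, in how many views it has been captured.

    `views` is any iterable of camera parameters, and
    `point_visible_in_view(points, view)` is assumed to return a boolean
    mask of shape (N,) marking the points visible in that view.
    """
    counts = np.zeros(len(points), dtype=np.int64)
    for view in views:
        counts += point_visible_in_view(points, view).astype(np.int64)
    return counts

def coverage_ratio(counts, k=3):
    """Possible stopping rule: add views until, say, 95% of points
    have been captured at least k times."""
    return float(np.mean(counts >= k))
```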

yhyang-myron commented 1 year ago

Hi!

  1. We use SAM to get segments on RGB data. If you want to use some other kind of data, you should first check whether SAM works well on it.
  2. Sorry, I don't fully understand this question. Do you mean the number of RGB-D frames used to build the point clouds?
SjoerdBraaksma commented 1 year ago

Hi yhyang-myron!

1) I got the model to work on non-RGB data as well (although it's not as good), so that point is resolved.

2) Diving deeper into the model, I think this question is irrelevant. Sorry for asking!

I do have one further question though: I am following this Medium post: https://medium.com/@OttoYu/point-cloud-segmentation-with-sam-in-multi-angles-add5a5c61e67 and the end result is a segmentation label for each point, from each different angle. As you can see, however, it segments sub-structures of objects (for example, the chapel tower ends up as a separate segment from the chapel as a whole).

How would you go about re-merging these sub-structures into the final objects when you don't have a ground truth? Can I use the merging method you use (bidirectional merging) for that?

yhyang-myron commented 1 year ago

Hi, indeed, SAM may produce segmentation results at different granularities when separating objects. After a rough look at the page, I think you could try bidirectional merging to re-merge these sub-structures. For example, define a fusion strategy that integrates the related parts into the largest part. Alternatively, when saving the SAM results, try to keep only the mask with the largest coverage area (if its overlap exceeds a certain IoU value). Just a small suggestion, which may not be accurate.
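To illustrate that suggestion (my own sketch, not the exact bidirectional merging implemented in SAM3D; the threshold and label conventions are assumptions), a simple overlap-based fusion on per-point segment ids could look like this:

```python
import numpy as np

def merge_into_largest(labels_a, labels_b, overlap_thresh=0.5):
    """Merge per-point segment labels from pass A into the segments of pass B:
    each A-segment is absorbed by the B-segment it overlaps most, provided that
    overlap covers at least `overlap_thresh` of the A-segment. Segments that are
    not absorbed keep their A label, offset so the ids do not collide with B."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    merged = labels_a.copy()
    offset = labels_b.max() + 1
    for seg_id in np.unique(labels_a):
        mask = labels_a == seg_id
        overlapping, counts = np.unique(labels_b[mask], return_counts=True)
        best = np.argmax(counts)
        if counts[best] / mask.sum() >= overlap_thresh:
            merged[mask] = overlapping[best]   # absorb into the most overlapping segment
        else:
            merged[mask] = seg_id + offset     # keep as its own segment
    return merged
```

Running it in both directions (A into B, then B into the result) would be a crude approximation of the bidirectional idea; `overlap_thresh` controls how aggressively sub-structures get absorbed.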

SjoerdBraaksma commented 1 year ago

Awesome! Thank you for the quick and in-depth reply, I'm going to try some things out. I'll report back in this thread if we find a nice combination of strategies. Amazing stuff you have made! I enjoy playing around with it.

yhyang-myron commented 1 year ago

Thanks for your interest in our work!