Open SjoerdBraaksma opened 1 year ago
Hi!
Hi yhyang-myron!
1) I got the model to work on non-RGB data as well (although it's not as good), so that point is resolved.
2) Diving deeper into the model, I think this question is irrelevant. Sorry for asking!
I do have one further question though: I am following this Medium post: https://medium.com/@OttoYu/point-cloud-segmentation-with-sam-in-multi-angles-add5a5c61e67 and the end result is a segmentation label for each point, from each different angle. As you can see, however, it segments sub-structures of objects (for example, the chapel tower becomes a separate segment from the chapel as a whole).
How would you go about re-merging these sub-structures into complete objects when you don't have a ground truth? Can I use the merging method you use (bi-directional merging) for that?
Hi, indeed, SAM may produce segmentation results with different extents when separating objects. After briefly reviewing the page, I think you could try bi-directional merging to re-merge these sub-structures. For example, set a fusion strategy that integrates the relevant parts into the largest part. Or, when saving SAM results, keep the mask with the largest coverage area whenever possible (if it exceeds a certain mIoU value). Just a small suggestion, which may not be accurate.
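The fusion strategy described above could be sketched roughly as follows. This is not code from the repository: `merge_into_largest` is a hypothetical helper that absorbs each smaller SAM segment into the largest already-kept segment containing enough of its points. A containment ratio is used instead of a strict IoU, since a small sub-structure (like the chapel tower) shares few points relative to its union with the whole building:

```python
import numpy as np

def merge_into_largest(segments, overlap_thresh=0.5):
    """Absorb each smaller segment into the largest kept segment that
    already contains at least overlap_thresh of its points.

    segments: list of 1-D integer arrays of point indices, one per SAM mask.
    Returns a list of merged index arrays.
    """
    # Process the largest masks first so whole objects become the "parents".
    order = sorted(range(len(segments)), key=lambda i: -len(segments[i]))
    merged = []  # list of Python sets of point indices
    for i in order:
        seg = set(np.asarray(segments[i]).tolist())
        best_j, best_ratio = -1, 0.0
        for j, parent in enumerate(merged):
            # Fraction of this segment's points already inside the parent.
            ratio = len(seg & parent) / len(seg)
            if ratio > best_ratio:
                best_j, best_ratio = j, ratio
        if best_j >= 0 and best_ratio >= overlap_thresh:
            merged[best_j] |= seg   # sub-structure joins the larger object
        else:
            merged.append(seg)      # new stand-alone object
    return [np.array(sorted(s)) for s in merged]
```

For example, a 4-point "tower" mask that shares half its points with an 8-point "chapel" mask would be absorbed at the default threshold, while a disjoint mask stays its own object.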
Awesome! Thank you for the quick and in-depth reply; I'm going to try some things out. I'll report back in this thread if we find a nice combination of strategies. Amazing work, and I enjoy playing around with it!
Thanks for your interest in our work!
Hello! I would like to use this package to segment my own point cloud data. However, it does not contain RGB values, and I don't have a pre-segmented ground truth to evaluate the outcome. My question is twofold:
1) Is it advisable to use the number of returns / intensity value as a substitute for RGB? Do I need to rescale the values to fall into RGB value ranges?
2) Is there a metric to determine whether you have accumulated enough training images from different angles of your point cloud? I was thinking of something like a pointwise-contribution metric that measures how often a point has been captured in an image. Or, across multiple training sequences, stop when segments become stable across predictions?
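On question 1, a common approach (not specific to this repository) is to rescale intensity into 0-255 and replicate it across three channels as pseudo-RGB; percentile clipping keeps a few extreme returns from compressing the useful range. A small sketch, assuming intensity is a 1-D float array with one value per point (the function name is hypothetical):

```python
import numpy as np

def intensity_to_rgb(intensity, low_pct=2.0, high_pct=98.0):
    """Rescale LiDAR intensity to 0-255 and replicate it across three
    channels so it can stand in for RGB.  Percentile clipping guards
    against a few extreme returns compressing the useful range.
    """
    lo, hi = np.percentile(intensity, [low_pct, high_pct])
    scaled = np.clip((intensity - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    gray = (scaled * 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)  # shape (N, 3) pseudo-RGB
```

Whether the model benefits from this depends on how much the intensity values correlate with the visual structure SAM was trained on, so it is worth validating on a small subset first.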
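On question 2, the pointwise-contribution idea can be made concrete by counting, per point, how many rendered views assigned it a SAM label, and tracking the fraction of points seen at least some minimum number of times; you could stop adding camera angles once that fraction plateaus. A minimal sketch with hypothetical names, assuming each view is represented by the indices of the points visible in it:

```python
import numpy as np

def coverage_stats(num_points, views, min_hits=3):
    """Pointwise-contribution metric: for each point, count in how many
    rendered views it was captured.  'views' is a list of index arrays,
    one per rendered image, listing the points visible in that image.
    Returns (per-point hit counts, fraction of points with >= min_hits).
    """
    hits = np.zeros(num_points, dtype=np.int64)
    for visible in views:
        hits[np.asarray(visible)] += 1  # indices within one view are unique
    covered = float(np.mean(hits >= min_hits))
    return hits, covered
```

Recomputing `covered` after each new batch of views gives a simple stopping signal: when it stops improving, additional angles are mostly re-observing the same points.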