drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

Odd Classification Behaviour #63

Closed Man4231 closed 6 months ago

Man4231 commented 7 months ago

Hi Damien,

I just wanted to say thanks again for a wonderful project and congratulations on your new paper.

I have adapted your project slightly to train on a custom dataset, and it works relatively well, except that it labels certain points floating in the air above the actual ground surface as ground, even though they are actually vegetation points, as shown in the image below.

[screenshot: inferred point cloud with floating points above the ground surface]

Do you have any ideas how to mitigate/correct this issue or where to start looking/what to change?

Thanks!

drprojects commented 7 months ago

Hi @Man4231, thanks for your interest in our project!

Could you please share more details of what is not working as you'd like? I can't tell what the colors in your image represent.

PS: if you are using this project, don't forget to give it a ⭐, it matters to us!

Man4231 commented 7 months ago

Hi Damien,

Thanks for such a quick response.

Thanks!

drprojects commented 7 months ago

Hi @Man4231

I have a few more questions to be 100% sure I understand.

> The screenshot is an inferred cloud, the colour is simply an intensity wash (all of the points you see have been classified as ground). This is the raw output as it has been classified from SPT but only with the "ground" class turned on.

  • So, the color we see corresponds to your predictions, with the colormap attributed to the "ground" class in your dataset? Or is it the LiDAR intensity of your input cloud (i.e. something like NAG[0].intensity)?
  • As of now, the raw output of SPT is actually a classification on $P_1$ (i.e. the first-level superpoint partition, what you find in NAG[1]). This means obtaining the corresponding classification on $P_0$ (i.e. the points, what you find in NAG[0]) requires a little bit of reindexing work (this is a feature I will soon release, by the way). Have you coded that yourself, using the indices in NAG[0].super_index? Otherwise, I am not sure how you could show this illustration of predicted classification on the $P_0$ level. Are you maybe showing NAG[0].y in this illustration?

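For reference, the reindexing step amounts to a single integer-array indexing operation. Here is a minimal sketch with toy numpy arrays standing in for the repo's actual NAG objects (names like superpoint_logits are illustrative, not the library's API):

```python
import numpy as np

# Hypothetical per-superpoint class logits for 3 level-1 superpoints, 2 classes
superpoint_logits = np.array([[2.0, 0.1],
                              [0.2, 1.5],
                              [3.0, 0.5]])
superpoint_pred = superpoint_logits.argmax(axis=1)  # class of each superpoint

# super_index: for each level-0 point, the index of its parent superpoint,
# playing the role of NAG[0].super_index
super_index = np.array([0, 0, 1, 2, 2, 1])

# Full-resolution prediction: each point inherits its superpoint's class
point_pred = superpoint_pred[super_index]
print(point_pred.tolist())  # [0, 0, 1, 0, 0, 1]
```

The same fancy-indexing pattern works identically on torch tensors, so it can be applied directly to the model's logits on $P_1$.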
drprojects commented 7 months ago

Without a reply from you, may I close this issue?

Man4231 commented 7 months ago

Hi Damien,

Sorry for the delay.

Any ideas how we can fix this issue?

Thanks, Man4231

drprojects commented 6 months ago

So, from your answer I am not 100% certain, but I assume you managed to produce full-resolution outputs from per-superpoint predictions using NAG[0].super_index.

You should visualize the superpoint partition $P_1$ to see whether the points whose classification you find unsatisfying are isolated in their own segments, or whether ground segments are bleeding into the vegetation a bit. To this end, we provide visualization tools in the repo that could help you with this; I invite you to have a look at the documentation and notebooks to get started.
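Besides visual inspection, this distinction can also be checked numerically: for each superpoint that contains mispredicted points, measure how pure its ground-truth labels are. A sketch with toy arrays (super_index, pred, and gt are illustrative stand-ins for the repo's NAG attributes, not its API):

```python
import numpy as np

# Toy inputs: parent superpoint of each level-0 point, plus predicted and
# ground-truth labels per point (0 = ground, 1 = vegetation)
super_index = np.array([0, 0, 0, 1, 1, 2])
pred = np.array([0, 0, 0, 0, 0, 1])
gt = np.array([0, 0, 1, 1, 1, 1])

# For each superpoint containing mispredicted points, compute the fraction of
# its points sharing the majority ground-truth label. Purity near 1 means the
# whole segment was mislabeled by the classifier; low purity means the
# partition itself mixes ground and vegetation points in one segment.
purities = {}
for sp in np.unique(super_index[pred != gt]):
    labels = gt[super_index == sp]
    purities[int(sp)] = float((labels == np.bincount(labels).argmax()).mean())
print(purities)  # {0: 0.6666666666666666, 1: 1.0}
```

Here, superpoint 0 mixes both classes (a partition problem), while superpoint 1 is pure but entirely mispredicted (a classifier problem); the two cases call for different fixes.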

I do not know what your downstream task is, but it seems to me that the model is still doing a pretty good job at separating ground and vegetation. If these "flying" ground points are a big problem for you, you can:

PS: if you are using this project, don't forget to give it a ⭐, it matters to us!