yangyangyang127 / Seg-NN

[CVPR2024 Highlight] No Time to Train: Empowering Non-Parametric Networks for Few-shot 3D Scene Segmentation

What will the performance be on the corrected experimental setting? #5

Open ZhaochongAn opened 2 months ago

ZhaochongAn commented 2 months ago

Hello, and thank you for sharing the code! This paper is very interesting.

I also had a paper, "Rethinking Few-shot 3D Point Cloud Semantic Segmentation" (Github link), accepted to CVPR2024. In it, we found that the current experimental setting has two significant issues. In particular, the current setting leaks clues about the target class through the density difference between the sampled foreground and background points, which makes the few-shot problem much easier. The scarcity of points in the current setup is also unrealistic. We therefore propose a new, more reasonable experimental setting along with benchmarks for fair evaluation in future work. So, I would like to kindly ask what the performance of Seg-NN would be under the corrected few-shot setting, aiming to help future researchers.
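For intuition, here is a minimal, hypothetical sketch (not the benchmark's actual sampler) of the leakage we describe: if an episode draws a fixed point budget split evenly between a small foreground object and a large background region, the foreground ends up far denser, so a classifier that only looks at local point density can separate the two without learning any semantics. All numbers and the `knn_distance` helper below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a small foreground object inside a much larger background region.
fg_region = rng.uniform(0.0, 1.0, size=(100000, 3)) * [0.5, 0.5, 0.5]  # small object
bg_region = rng.uniform(0.0, 1.0, size=(100000, 3)) * [5.0, 5.0, 3.0]  # large room

# Old-style episodic sampling (assumed): fixed budget, half foreground / half background.
n_fg, n_bg = 1024, 1024
fg = fg_region[rng.choice(len(fg_region), n_fg, replace=False)]
bg = bg_region[rng.choice(len(bg_region), n_bg, replace=False)]
points = np.concatenate([fg, bg])
labels = np.concatenate([np.ones(n_fg, dtype=int), np.zeros(n_bg, dtype=int)])

# Local density proxy: distance to the k-th nearest neighbour (smaller = denser).
def knn_distance(pts, k=8):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # index 0 is the point itself

density_score = -knn_distance(points)  # higher score = denser neighbourhood

# Predict "foreground" for the densest half of the points: no semantics involved.
threshold = np.median(density_score)
pred = (density_score > threshold).astype(int)
acc = (pred == labels).mean()
print(f"foreground/background accuracy from density alone: {acc:.2%}")
```

Under these toy assumptions the density-only rule recovers the foreground almost perfectly, which is why a corrected setting should avoid sampling schemes that tie point density to the target class.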

Thank you again for the great work!

yangyangyang127 commented 2 months ago

Thanks for the reminder, and congratulations on your work. I will run Seg-NN's code under COSeg's setting to evaluate our performance.