Closed: ouenal closed this issue 2 years ago
Hi @ouenal, thank you for raising this question!
We noticed that Jiang et al. (GPC) did not uniformly sample scans from the whole dataset. We answer this question from the following perspectives:
Hope the above answers your concerns. Please let us know if you have any other questions!
I would have to disagree with two of the statements that you've made.
Hi @ouenal, thanks for the follow-ups!
For the first comment:
For the second comment:
Thanks again for the comment and suggestion. Please let us know if you have any other questions!
Thanks for the back and forth. I'm sure we will keep disagreeing on some things, but it's a valuable discussion to have nonetheless. Data efficiency in LiDAR segmentation is still a fairly new topic, and we as a community have quite a lot to research and improve here. Keep up the good work!
Hi @ouenal, thank you so much for sharing your thoughts and experience with us! Your comments have enlightened us to consider more practical scenarios when conducting experiments.
Yep, data-efficient LiDAR perception is the blue ocean, and let's keep exploring it!
Hi @ouenal, long time no see! Here are some follow-ups for this issue:
Hi @ldkong1205, Is there any news about this different set-up?
Hi @yyliu01, thanks for your interest in this work!
Hi @ldkong1205,
Thanks so much for the solid work. We will follow up once the results have been released.
Best Regards, Yuyuan
In your paper (Tab. 2) I see that you compare to Jiang et al. (GPC). In GPC, the authors state that "for SemanticKITTI, considering that adjacent frames could have very similar contents", they try their best "to ensure that labeled and unlabeled data do not come from the same sequence." This implies that their labeled/unlabeled splits do not have the same variety as your uniform sampling, so a direct comparison is unfair.
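To make the difference between the two protocols concrete, here is a minimal sketch contrasting uniform frame sampling with the sequence-wise split GPC describes. The per-sequence frame counts below are hypothetical placeholders (real SemanticKITTI sequences have different lengths), and both split functions are illustrative, not either paper's actual implementation:

```python
import random

# Hypothetical per-sequence frame counts standing in for the SemanticKITTI
# train split (sequences 00-07, 09-10); real frame counts differ.
SEQUENCES = {f"{i:02d}": 200 for i in [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]}

def uniform_split(sequences, label_ratio, seed=0):
    """Uniformly sample labeled frames from the whole dataset, so labeled
    and unlabeled frames can come from the same sequence."""
    frames = [(seq, idx) for seq, n in sequences.items() for idx in range(n)]
    rng = random.Random(seed)
    rng.shuffle(frames)
    k = int(len(frames) * label_ratio)
    return frames[:k], frames[k:]

def sequence_split(sequences, label_ratio):
    """Label whole sequences until roughly label_ratio of all frames are
    covered, so labeled and unlabeled frames never share a sequence
    (the protocol GPC describes)."""
    total = sum(sequences.values())
    labeled, unlabeled, count = [], [], 0
    for seq, n in sorted(sequences.items()):
        frames = [(seq, idx) for idx in range(n)]
        if count < total * label_ratio:
            labeled += frames
            count += n
        else:
            unlabeled += frames
    return labeled, unlabeled

lab_u, _ = uniform_split(SEQUENCES, 0.1)
lab_s, _ = sequence_split(SEQUENCES, 0.1)

# Under uniform sampling the labeled set tends to touch many sequences;
# under the sequence-wise split it is confined to a few.
print("uniform covers", len({s for s, _ in lab_u}), "sequences;",
      "sequence split covers", len({s for s, _ in lab_s}))
```

The point of the sketch: at the same label budget, the uniform protocol sees frames from nearly every sequence (more scene variety), while the sequence-wise protocol concentrates all labels in a handful of sequences, which is exactly why numbers from the two setups are hard to compare directly.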