JXZxiao5 opened 2 years ago
Thank you for your attention to our work. Training with 8 GPUs can yield better results. There is also some randomness in training, so you may want to run it again.
If you want higher performance in the 40,000-point setting, you can either retrain from scratch or fine-tune only the second stage with 40,000 points.
Yes, I tried that, but it does not work due to the heavy memory cost of 40K points. Instead, I used the released supervised pre-trained model to run experiments with 10K, 20K, 25K, and 40K points, and found that the EPE3D metric also increases as the number of points increases. I don't know whether something is wrong with my experiments, or how to explain this.
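For reference, EPE3D in scene-flow evaluation is commonly computed as the mean per-point Euclidean distance between predicted and ground-truth 3D flow vectors. A minimal NumPy sketch (the function name and toy data are my own, not from this repo), which makes it easy to check how the metric behaves at different point counts:

```python
import numpy as np

def epe3d(pred_flow, gt_flow):
    """Mean 3D end-point error: average Euclidean distance between
    predicted and ground-truth per-point scene-flow vectors."""
    # pred_flow, gt_flow: (N, 3) arrays of per-point 3D flow
    return float(np.linalg.norm(pred_flow - gt_flow, axis=1).mean())

# Toy check: evaluate synthetic predictions at several point counts.
rng = np.random.default_rng(0)
for n_points in (10000, 20000, 25000, 40000):
    gt = rng.normal(size=(n_points, 3))
    pred = gt + rng.normal(scale=0.05, size=(n_points, 3))
    print(n_points, epe3d(pred, gt))
```

If the per-point error distribution were identical across settings, EPE3D would stay roughly constant with point count, so a consistent increase points to the model itself degrading on denser inputs rather than an artifact of the metric.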
Thanks for the great project. When I reproduced the self-supervised experiment, there was a gap between my results and those published in the paper; there may be a bug in the self-supervised training code. Could you provide the logs or models from your self-supervised training?