AlibabaResearch / rcp

MIT License
25 stars · 5 forks

Self-supervised training bugs #2

Open JXZxiao5 opened 2 years ago

JXZxiao5 commented 2 years ago

Thanks for the great project. When I reproduced the self-supervised experiment, there was a gap between our results and those published in the paper. There may be bugs in the code regarding self-supervised training. Can you provide logs or models on self-supervised training?

gxd1994 commented 2 years ago

Thank you for your attention to our work. You can train with 8 GPUs; that may yield better results. There is also some randomness in training, so you could run it again.

gxd1994 commented 2 years ago

If you want to get higher performance in the setting of 40000 points, you can choose to retrain, or only perform finetune in the second stage using 40000 points.

JXZxiao5 commented 2 years ago

> If you want to get higher performance in the setting of 40000 points, you can choose to retrain, or only perform finetune in the second stage using 40000 points.

Yes, I tried that, but it does not work due to the heavy memory cost of 40K points. I leveraged the released supervised pre-trained model to conduct experiments in the settings of 10K, 20K, 25K, and 40K points, and found that the EPE3D metric also increases as the number of points increases. I don't know whether there is something wrong with my experiments or how to explain this.
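For reference, a minimal sketch of how one might compare the same scene at several point budgets by random subsampling (a common way to keep memory bounded when evaluating at different point counts). This assumes NumPy point clouds of shape (N, 3); `subsample_points` is a hypothetical helper for illustration, not part of this repository:

```python
import numpy as np

def subsample_points(points, n_points, seed=0):
    """Randomly subsample an (N, 3) point cloud to n_points.

    If the cloud has fewer than n_points points, sample with
    replacement so the output shape is always (n_points, 3).
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    idx = rng.choice(n, size=n_points, replace=n < n_points)
    return points[idx]

# Evaluate the same 40K-point scene at several budgets.
cloud = np.random.rand(40000, 3).astype(np.float32)
for n in (10000, 20000, 25000, 40000):
    sampled = subsample_points(cloud, n)
    print(sampled.shape)  # (n, 3)
```

Fixing the random seed keeps the subsampled sets reproducible across runs, which helps separate genuine point-count effects on EPE3D from sampling noise.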