Closed dlsrbgg33 closed 3 years ago
Hi Shin, Unfortunately, it is not possible to make PyTorch training runs reproduce exactly the same numbers each time. I tried to make the training as reproducible as possible, e.g. by setting the seeds of the workers in the dataloaders.
Best, Max
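For context, a common recipe for making PyTorch training as reproducible as possible looks like the sketch below. This is a hypothetical illustration, not the repo's actual code; the function names `seed_everything` and `worker_init_fn` are my own, and full determinism is still not guaranteed on GPU (e.g. some CUDA ops use non-deterministic atomics).

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 0):
    # Seed every RNG that typically affects a PyTorch training run.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds CPU and all CUDA devices
    # Ask cuDNN for deterministic kernels and disable autotuning,
    # which can otherwise pick different (non-deterministic) algorithms.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

def worker_init_fn(worker_id: int):
    # Derive a per-worker seed from the base seed so data augmentations
    # differ between dataloader workers but repeat across runs.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)
```

`worker_init_fn` would then be passed to `torch.utils.data.DataLoader(..., worker_init_fn=worker_init_fn)`.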
Hi Max,
Thank you for your quick reply. I understand there is always some randomness in PyTorch. For the results in your paper, did you run the training several times and average them?
Best, Shin
Hi Shin, No, I only ran it once; otherwise it would require a lot of resources.
Best, Max
Hi Max, Thank you!
Best, Shin
Thank you for your great work and contribution.
I've run your provided code and found that the resulting performance is quite different from the reported one. The details of the experiment follow.
Experiment type: Baseline (train_baseline.py with xmuda_baseline.yaml, A2D2 source, SemanticKITTI inference)
Best validation model: 2D (65k iter) / 3D (70k iter)
Test result: 37.46 (2D) / 35.24 (3D) / 44.44 (ensemble)
I think the above result differs substantially from the paper.
Even when I run the same code twice, the loss differs at the same iteration.
Could you help me resolve this issue?
Thank you in advance.
Best, Shin
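As an aside on tracking down this kind of run-to-run drift: PyTorch offers a strict mode that raises an error whenever an op without a deterministic implementation is executed, which can pinpoint the source of diverging losses. This is a hedged sketch, not something from the thread; the environment variable is only needed for certain CUDA ops.

```python
import os
import torch

# Must be set before CUDA is initialized; required by cuBLAS for
# deterministic behavior of some matmul-like ops on GPU.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

# From here on, any op lacking a deterministic implementation raises
# a RuntimeError instead of silently producing varying results.
torch.use_deterministic_algorithms(True)

x = torch.randn(4, 4)
y = x @ x  # matrix multiply is deterministic, so this runs fine
```

Running the training under this mode (briefly, since deterministic kernels can be slower) shows exactly which op, if any, is responsible for the nondeterminism.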