LiewFeng opened this issue 1 year ago
@LiewFeng Hi, I'm a little short on computing resources here at the moment, so I checked my previous training logs. Here are the results of the last 10 epochs, and here is another run. The results appear to be unstable, even when training on more data than is used now.
So I suggest you try it a few more times with different random seeds and compare the results. You may also use a larger batch size (I use a batch size of 16). I'm not sure whether differences in software and hardware environments are to blame for this. If you can't reproduce the exact numbers in the README, my suggestion is to establish your own baseline performance and move on with the experiments; that is more important. Hope this helps.
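For running the multiple-seed comparison suggested above, a minimal seeding helper like the following can help make individual runs more repeatable. This is a generic PyTorch sketch, not code from this repository; the `set_seed` name and the seed values are illustrative, and note that some CUDA kernels remain nondeterministic even with these settings.

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed the common RNG sources used during training.

    This reduces, but does not eliminate, run-to-run variance:
    some CUDA ops are nondeterministic unless further restricted.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Optional: trade speed for determinism in cuDNN convolutions.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Illustrative: launch several runs with different seeds and
# compare the final-epoch metrics across them.
for seed in (42, 1337, 2023):
    set_seed(seed)
    # ... start one training run here with this seed ...
```

Reporting the mean and spread across such runs also makes it easier to tell whether a 1-2 point gap is real or just noise.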
Hi, @Cc-Hy. I ran the code on KITTI raw on 2 GPUs without any modification, but I still find that the performance is not very stable. First, the performance fluctuates over the final 10 epochs of training, with a 1-2 point difference between the last two epochs. Second, different runs differ from each other by 1-2 points. And they are all lower than the performance in the README, especially for the easy setting, which is 2-3 points lower. Any suggestions?