Ghostish / Open3DSOT

Open source library for Single Object Tracking in point clouds.
MIT License

Unstable training results #22

Closed sallymmx closed 2 years ago

sallymmx commented 2 years ago

For BAT: I have run two training experiments on KITTI Cars with the same provided config (only changing the directory) on the same platform (the provided conda env and CUDA 11.0).

When testing, the results of the two experiments are quite different. Each model was tested twice, and the numbers were almost identical across the two test runs:

(1) {'precision/test': 68.40597534179688, 'precision/test_epoch': 68.40597534179688, 'success/test': 54.71745681762695, 'success/test_epoch': 54.71745681762695}
(2) {'precision/test': 73.75078582763672, 'precision/test_epoch': 73.75078582763672, 'success/test': 59.293277740478516, 'success/test_epoch': 59.293277740478516}

When testing your provided pretrained model (./pretrained_models/bat_kitti_car.ckpt), also twice, I could reproduce your listed results: {'precision/test': 78.87997436523438, 'precision/test_epoch': 78.87997436523438, 'success/test': 65.37126159667969, 'success/test_epoch': 65.37126159667969}
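To make the gap between my two runs concrete, here is a quick check using the numbers above (values rounded to two decimals; this is just arithmetic on the reported results, not part of the codebase):

```python
# Spread between the two training runs reported above.
import statistics

precisions = [68.41, 73.75]   # precision/test from runs (1) and (2)
successes  = [54.72, 59.29]   # success/test from runs (1) and (2)

print(statistics.mean(precisions), statistics.stdev(precisions))  # ~71.08 mean, ~3.78 sample std
print(statistics.mean(successes),  statistics.stdev(successes))   # ~57.01 mean, ~3.23 sample std
```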

So, why is the training process so unstable? Could one reason be that the training and testing samples are too few to obtain stable results?

Do you also observe this phenomenon? How do you run the experiments: a single run, or multiple runs from which the best result is chosen? Otherwise the method seems quite unstable. We would appreciate an explanation. Looking forward to your reply on this issue.

Ghostish commented 2 years ago

Hi Sally,

We do not have this problem in training. On the platform where we reproduced our BAT results (CUDA 10.1, PyTorch 1.4.0), training is stable for cars; in fact, the variance among different runs is less than 1 point in precision. I am not sure what is causing this on your side, but it may be related to an incorrect dataset setup or the CUDA version.
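One generic thing worth double-checking on your side is that the random seeds and cuDNN determinism are pinned. The snippet below is a sketch of the standard PyTorch / PyTorch Lightning reproducibility settings, not necessarily how this codebase wires them up:

```python
# Generic PyTorch / Lightning reproducibility checklist (standard library APIs;
# treat this as a sketch, not the exact setup used in this repo).
import torch
import pytorch_lightning as pl

pl.seed_everything(42)                      # seeds Python, NumPy, and torch RNGs
torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False      # disable nondeterministic autotuning
```

Even with these settings, some GPU ops can behave slightly differently across CUDA versions, which could account for part of the gap between CUDA 10.1 and 11.0.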

The following closed issues might also help.

#6 #5

sallymmx commented 2 years ago

Thanks for your reply.