ydhongHIT opened this issue 4 years ago
Using label smoothing (LS) is a common and basic trick in ImageNet training. It helps avoid overfitting, and we simply follow previous works in using it. It brings about a 0.2 improvement for Res2Net. You can remove it if you want to compare with our result. Except for ResNet, where we use the torchvision results, all other baseline results are reproduced with this code. Actually, this training code is not optimal: we have been told that some people achieve better results when using their own training code with the same training strategy.
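For readers unfamiliar with the trick being discussed: label smoothing replaces the one-hot target with a softened distribution before computing cross-entropy. Below is a minimal pure-Python sketch of one common formulation (the true class gets 1 - eps and eps is spread uniformly over the other classes); note that some implementations, e.g. PyTorch's built-in `label_smoothing` argument, spread eps over all classes including the target, so the exact numbers differ slightly.

```python
import math

def smooth_labels(target, num_classes, eps=0.1):
    # True class keeps 1 - eps; the remaining eps of probability
    # mass is spread uniformly over the other num_classes - 1 classes.
    off = eps / (num_classes - 1)
    return [1.0 - eps if c == target else off for c in range(num_classes)]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def smoothed_cross_entropy(logits, target, eps=0.1):
    # Cross-entropy between the smoothed target distribution
    # and the model's softmax probabilities.
    probs = softmax(logits)
    q = smooth_labels(target, len(logits), eps)
    return -sum(qi * math.log(pi) for qi, pi in zip(q, probs))
```

With eps=0 this reduces to ordinary cross-entropy, which is why removing LS from the config is enough to get a comparable-without-the-trick baseline.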
Thanks for your reply. I noticed that the original ResNet training code used ColorJitter and a Lambda(Lighting) transform. Do you know how these two transforms affect the final test accuracy?
Sorry, I didn't test those data augmentations. I just followed the common augmentation settings to train our model.
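For context, the Lighting transform mentioned above is the AlexNet-style PCA lighting noise, usually wrapped in `transforms.Lambda` or applied directly after `ToTensor`. A sketch of a common PyTorch implementation is below; the eigenvalue/eigenvector constants are the widely circulated ImageNet statistics and are an assumption here, not values taken from this repo's code.

```python
import torch

class Lighting:
    """AlexNet-style PCA lighting noise on a CxHxW float image tensor.

    Assumed ImageNet PCA statistics; replace with your own if your
    pipeline computes them from the training set.
    """
    def __init__(self, alphastd=0.1):
        self.alphastd = alphastd
        self.eigval = torch.tensor([0.2175, 0.0188, 0.0045])
        self.eigvec = torch.tensor([
            [-0.5675,  0.7192,  0.4009],
            [-0.5808, -0.0045, -0.8140],
            [-0.5836, -0.6948,  0.4203],
        ])

    def __call__(self, img):
        if self.alphastd == 0:
            return img
        # Sample per-channel coefficients and build an RGB offset
        # along the principal components of ImageNet pixel colors.
        alpha = img.new_empty(3).normal_(0, self.alphastd)
        rgb = (self.eigvec * alpha * self.eigval).sum(dim=1)
        return img + rgb.view(3, 1, 1)
```

In pipelines that use it, this typically sits after `transforms.ToTensor()`, alongside `transforms.ColorJitter(0.4, 0.4, 0.4)` applied on the PIL image beforehand.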
I noticed that you used label smoothing in your training code, and that the results in the paper were obtained with this code. I think the comparison may be unfair because of this trick.