Hi, I found that upgrading to a higher PyTorch version may be a possible cause of the unstable training. You could try to incorporate the tricks mentioned here (like adding a bounding box regression / Fast R-CNN branch and using OICR with more training iterations) into the PyTorch 0.4.0 version of the code for more stable training.
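For what it's worth, the bounding box regression / Fast R-CNN branch is roughly an extra head on top of the RoI features that predicts per-class box deltas, trained with a smooth-L1 loss against the pseudo ground-truth boxes mined by the refinement stages. Here is a minimal sketch (the layer sizes, names, and the 4096-d RoI feature dimension are assumptions for illustration, not this repo's actual implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BBoxRegressionBranch(nn.Module):
    """Hypothetical Fast R-CNN style head: per-class box deltas from RoI features."""
    def __init__(self, feat_dim=4096, num_classes=20):
        super().__init__()
        # 4 regression targets (dx, dy, dw, dh) per class
        self.bbox_pred = nn.Linear(feat_dim, 4 * num_classes)

    def forward(self, roi_feats):
        return self.bbox_pred(roi_feats)

# Usage sketch: regress only on RoIs matched to pseudo ground-truth boxes.
# `roi_feats`, `bbox_targets`, and `fg_mask` would come from the OICR pipeline;
# here they are dummy tensors just to show the shapes.
branch = BBoxRegressionBranch()
roi_feats = torch.randn(128, 4096)        # dummy RoI features
bbox_targets = torch.randn(128, 4 * 20)   # dummy per-class regression targets
fg_mask = torch.rand(128) > 0.5           # dummy foreground indicator
deltas = branch(roi_feats)
reg_loss = F.smooth_l1_loss(deltas[fg_mask], bbox_targets[fg_mask])
```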
Thank you. I'm trying to run more iterations before the learning rate drops.
Could you share your findings here after you get some numbers?
I did a few more experiments. I changed the number of iterations to 75,000 and reduced the learning rate at 55,000, i.e., 5,000 more iterations before the learning-rate drop. The results still fluctuate: 53.0 and 51.3, respectively. I use a 1080Ti and PyTorch 1.6.0.
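In plain PyTorch the schedule change boils down to something like the following (a minimal sketch with a placeholder model, not this repo's actual training loop; the previous milestone of 50,000 is inferred from "5,000 more iterations before reducing the learning rate"):

```python
import torch
import torch.nn as nn

# Placeholder model and optimizer; the real solver settings live in the yaml config.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

MAX_ITER = 75000       # extended total number of iterations
LR_DROP_ITER = 55000   # learning-rate step moved back by 5,000 iterations (was 50,000)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[LR_DROP_ITER], gamma=0.1)

for it in range(MAX_ITER):
    loss = model(torch.randn(4, 10)).sum()  # dummy forward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                         # drops the LR by 10x at iteration 55,000
```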
Thanks for the update. You could try downgrading to the PyTorch 0.4.0 version of the code for more stable training.
@Glutton-zh Have you solved your problem and achieved stable results?
Hello, I found that there is a big gap between the results of each training run. Using your code with vgg16_voc2007.yaml I obtained results of 52.3, 51.2, 50.2, etc., so it is difficult to tell whether a code change actually helps. Is there any way to reduce this fluctuation and make the results more stable?