@VisionEp1 Hi,
It is normal. Usually, the more images you train on, the higher the loss, but also the higher the accuracy (mAP) on the same test dataset. So accuracy (mAP) is a more important indicator than loss.
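Since mAP is the better indicator, it is worth tracking it during training rather than watching loss alone. A minimal sketch, assuming the usual darknet layout (data/obj.data and cfg/yolov3.cfg are placeholders for your own files): the -map flag makes this repo's darknet periodically evaluate mAP on the valid= set and draw it on the loss chart.

```
./darknet detector train data/obj.data cfg/yolov3.cfg darknet53.conv.74 -map
```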
Use a modern Volta/Turing GPU with CUDNN_HALF=1 in the Makefile, and/or use multi-GPU training after the first 1000 iterations.
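A sketch of that setup, assuming the standard Makefile options in this repo and a 2-GPU machine (the data/cfg paths are placeholders):

```
# In the Makefile, before running `make`:
GPU=1
CUDNN=1
CUDNN_HALF=1    # Tensor Core (mixed-precision) path; only helps on Volta/Turing

# First ~1000 iterations on a single GPU:
./darknet detector train data/obj.data cfg/yolov3.cfg darknet53.conv.74

# Then continue on multiple GPUs from the checkpoint darknet writes every 1000 iterations:
./darknet detector train data/obj.data cfg/yolov3.cfg backup/yolov3_1000.weights -gpus 0,1
```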
Usually you should train for about as many iterations as you have images, i.e. ~20 million iterations. Or you should train at least 1 epoch = images/batch = 20 000 000 / 64 ≈ 312 500 iterations.
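A quick way to check that arithmetic for any dataset size (a throwaway shell helper, not part of darknet; batch=64 is the usual value in yolov3.cfg):

```
# Iterations needed for one epoch = images / batch
iters_per_epoch() { echo $(( $1 / ${2:-64} )); }

iters_per_epoch 20000000   # -> 312500, the ~1-epoch figure above
iters_per_epoch 25000      # -> 390, for the 25 000-image set mentioned below
```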
Usually I get a final avg loss of about 0.5, but I have seen final avg losses anywhere from 0.1 to 2.0.
Thanks! Another question: if the mAP increases only slightly between iterations, is this normal?
Yes, it is normal.
Hi, it's me once again.
I am currently training a yolov3 (not spp, 608x608, 12 anchors, focal_loss=1, classes=3) for 40 000 iterations.
The avg loss is currently about 0.55.
The loss was 0.6 at 10 000 iterations.
Train images: ~20 million. However, I had the same issue with 50 000+ iterations on a 25 000-image training set.
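For context, a sketch of the cfg fields this setup implies, assuming a standard yolov3.cfg with the 12 anchors split evenly over the three [yolo] layers (the anchor values themselves are omitted; they would come from darknet's calc_anchors):

```
[net]
width=608
height=608

# Last [convolutional] before each [yolo] layer:
# filters = (classes + 5) * masks_per_layer = (3 + 5) * 4 = 32
[convolutional]
filters=32

[yolo]
mask = 8,9,10,11   # 4 of the 12 anchors in this layer
num=12
classes=3
focal_loss=1
```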
I have the following questions:

Thanks in advance!