Open ghost opened 3 years ago
The influence of the initial weights fades with every iteration — you could start training from random weights (by providing no weights file) and reach the same final mAP (though probably after more iterations). Pre-trained weights give faster convergence, a lower chance of the training diverging, and possibly better results.
I wasn't paying attention and trained yolov4-csp with the yolov4 pre-trained weights (yolov4.conv.137), and I got
mAP@0.5 = 52%
which is acceptable for my dataset (it's a pretty challenging one). When I noticed my mistake, I got excited and thought that if I switched the pre-trained weights to yolov4-csp.conv.142, I would surely get better results. With exactly the same cfg file, I got mAP@0.5 = 6%.
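For reference, the two runs differ only in the pre-trained weights file passed to Darknet. Assuming the standard Darknet training CLI and hypothetical data/cfg paths (your actual paths will differ), the comparison looks like this:

```shell
# Run 1 (the "mistake"): yolov4-csp cfg with yolov4 backbone weights -> mAP@0.5 = 52%
./darknet detector train data/obj.data cfg/yolov4-csp.cfg yolov4.conv.137 -map

# Run 2 (the supposed "fix"): same cfg, matching csp pre-trained weights -> mAP@0.5 = 6%
./darknet detector train data/obj.data cfg/yolov4-csp.cfg yolov4-csp.conv.142 -map
```

Everything else (dataset, cfg, hyperparameters) was identical between the two runs.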
I'm very confused: I was basically using the wrong weights, and when I corrected them I got worse results. Can someone explain this phenomenon, please?