anakita opened this issue 4 years ago
@AlexeyAB Update: I also checked the predictions manually, and accuracy is very low for a few classes, which was not the case with YOLOv3. Examples of images where it gets confused: in the image above, the ground truth is tempo, but it labels the object as both car and tempo (each with at least 50% confidence), and similar double predictions happen in about 50% of the images. Also, in some images it detects the object while in others it misses it entirely. For example, in the image below it detects the cycle, but fails to detect it in other images of the same view.
Any suggestions and help are appreciated. Thank you.
Can you please give me some suggestions, @AlexeyAB? Thank you.
If something doesn’t work for you, then show 2 screenshots:
If you do not get an answer for a long time, try to find the answer among Issues with a Solved label: https://github.com/AlexeyAB/darknet/issues?q=is%3Aopen+is%3Aissue+label%3ASolved
Hello, great work on YOLOv4! I have 9 classes with 17k images, and I trained YOLOv4 for 22k iterations; the chart is below. But when I use yolo-obj_best.weights on the test data, I get a mAP of only 67%, as opposed to the training mAP of at least 83% shown in the chart above. However, when I use yolo-obj_3000.weights, I get a mAP of 73%. That is, at just 3000 iterations I got the best mAP on the test data compared to the other weight files. I have attached a screenshot of the mAP I got with yolo-obj_3000.weights.
The command used for calculating mAP on the test data is:
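(The original command did not survive in this copy of the thread. For reference, a typical darknet mAP evaluation command looks like the sketch below; the `obj.data`, `yolo-obj.cfg`, and weights paths are placeholders for whatever files the poster actually used, and `obj.data` must point its `valid=` entry at the test-set list for this to measure test mAP.)

```shell
# Hypothetical paths: substitute your own .data, .cfg, and .weights files.
# "detector map" is darknet's built-in mAP@0.50 evaluation mode.
./darknet detector map data/obj.data cfg/yolo-obj.cfg backup/yolo-obj_3000.weights
```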
I am not sure whether to call it overfitting, since I trained only for the minimum recommended number of iterations.
Some examples from the training data are:
Any suggestion is appreciated. Thank you.