pbalaji98 opened this issue 5 years ago
@pbalaji98 Hi,
> I didn’t use YOLO_Mark to generate my annotation files (I wrote a script to do it instead). To ensure that my boxes were correct, I drew the boxes and the centers of those boxes.
Use the `-show_imgs` flag during training to check that your annotations are drawn correctly.
> But with the VM I’m using to train, I can’t use OpenCV.
In that case, train with the `-dont_show` flag.
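For example, a training invocation with that flag might look like the sketch below (the data file, cfg, and pretrained-weights paths are assumptions based on the usual darknet layout, not taken from this thread):

```
./darknet detector train data/obj.data cfg/yolov3-spp.cfg darknet53.conv.74 -dont_show -map
```

`-map` periodically computes mAP on the validation set during training, which is useful when no display is available.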
> Do you have suggestions on how to improve the mAP?
- Use yolov3-spp.cfg instead of yolov3.cfg.
- Use the default anchors.
- Train for at least 4000 iterations.
- Use subdivisions=32 or 16 if you can.
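For context, `subdivisions` lives in the `[net]` section at the top of the cfg file; a typical fragment following the advice above would look something like this (values illustrative — keep your own width/height):

```
[net]
batch=64
subdivisions=16
width=416
height=416
```

Lower `subdivisions` means more images per mini-batch on the GPU at once (faster, more memory); raise it to 32 if you hit out-of-memory errors.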
@AlexeyAB Hello.
The circles are what I drew to denote the center of the boxes, just to make sure that I was annotating correctly.
I can't compile with OPENCV=1 because the VM doesn't have the OpenCV package installed. I can try to see if I can install it.
Why would yolov3-spp.cfg be better? Could you also explain why you recommend using the default anchors and training for at least 4000 iterations?
Bump
@pbalaji98 Hi,
Check your annotations by using Yolo_mark.
yolov3-spp.cfg is the best model currently.
But you can try to use new models:
https://github.com/AlexeyAB/darknet/files/3253820/yolo_v3_spp_pan_scale.cfg.txt
https://github.com/AlexeyAB/darknet/issues/3114#issuecomment-494148968
In many cases it's better to use the default anchors, because your re-calculated anchors may not be suitable for the corresponding [yolo] layers: https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
Recalculate anchors for your dataset for the width and height from your cfg-file: `darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416`, then set the same 9 anchors in each of the 3 [yolo] layers in your cfg-file. But you should change the anchor indexes in masks= for each [yolo] layer, so that the 1st [yolo] layer has anchors larger than 60x60, the 2nd larger than 30x30, and the 3rd the remaining anchors. Also change filters=(classes + 5)*<number of mask> before each [yolo] layer. If many of the calculated anchors do not fit under the appropriate layers, then just try using all the default anchors.
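The mask/filters bookkeeping described above can be sketched in a short script. This is only an illustration of the area rule (1st layer > 60x60, 2nd > 30x30, 3rd the rest) — the function name is mine, and the anchors used in the demo are the default yolov3 anchors, not recalculated ones:

```python
def assign_masks(anchors, classes):
    """Split anchor indexes across the three [yolo] layers by area.

    anchors: list of (w, h) pairs as printed by calc_anchors.
    Returns (masks, filters): the masks= indexes for the 1st/2nd/3rd
    [yolo] layer and the filters=(classes + 5) * <number of mask>
    value for the conv layer before each one.
    """
    large = [i for i, (w, h) in enumerate(anchors) if w * h > 60 * 60]
    medium = [i for i, (w, h) in enumerate(anchors) if 30 * 30 < w * h <= 60 * 60]
    small = [i for i, (w, h) in enumerate(anchors) if w * h <= 30 * 30]

    masks = [large, medium, small]  # 1st, 2nd, 3rd [yolo] layer
    filters = [(classes + 5) * len(m) for m in masks]
    return masks, filters

# Default yolov3 anchors, one class.
anchors = [(10, 13), (16, 30), (33, 23), (30, 61), (62, 45),
           (59, 119), (116, 90), (156, 198), (373, 326)]
masks, filters = assign_masks(anchors, classes=1)
print(masks)    # anchor indexes per [yolo] layer
print(filters)  # filters= before each [yolo] layer
```

Note that even the default anchors do not split 3/3/3 under the strict area rule (anchor 5, 59x119, exceeds 60x60 by area), which illustrates Alexey's point: if the split comes out lopsided, falling back to the default anchors and masks is often the safer choice.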
Hi Alexey,
What about anchor recalculation for yolov3-tiny? What are the recommendations for that (two scales, which sizes, etc.)? Thanks.
@AlexeyAB. I used Yolo_mark to check a few of my annotations and they seem correct. I also trained with the yolov3-spp.cfg file and didn't get better results. After 4000 iterations, the highest mAP score was 70%. The mAP began to fluctuate after 1800 iterations.
Do you have any other suggestions? The fluctuation of the mAP score is very drastic (a drop from 66% to 30% between consecutive mAP calculations, for example).
@pbalaji98 Can you show Loss & mAP chart?
Hi @AlexeyAB
> In many cases it's better to use default anchors, because your re-calculated anchors are not suitable for correspond [yolo]-layers: https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
> recalculate anchors for your dataset for width and height from cfg-file: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416 then set the same 9 anchors in each of 3 [yolo]-layers in your cfg-file. But you should change indexes of anchors masks= for each [yolo]-layer, so that 1st-[yolo]-layer has anchors larger than 60x60, 2nd larger than 30x30, 3rd remaining. Also you should change the filters=(classes + 5)* before each [yolo]-layer. If many of the calculated anchors do not fit under the appropriate layers - then just try using all the default anchors.
Best
Hello,
I am trying to train my network on some images but only reach an mAP of around 73%. Here are the details.
Here are the images and what the bounding boxes should look like. I didn’t use YOLO_Mark to generate my annotation files (I wrote a script to do it instead). To ensure that my boxes were correct, I drew the boxes and the centers of those boxes.
Below is my cfg file. I apologize for not being able to share any images like the cluster image or the loss chart, but with the VM I’m using to train, I can’t use OpenCV. Do you have suggestions on how to improve the mAP?
yolo_stream.txt