Closed: sctrueew closed this issue 5 years ago
Hi AlexeyAB,
Could you please give me some advice?
Thanks
I used "calc_anchors" too, but it didn't make much difference. I'm going to run it on CPU, and I'd rather use the Tiny model.
Did you calculate anchors Before or After training?
How can I improve it?
Check the mAP with the flag -iou_thresh 0.9
What mAP do you get?
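For example (the data, cfg, and weights paths below are only placeholders for your own files):
darknet.exe detector map data/obj.data yolov3-tiny.cfg backup/yolov3-tiny_best.weights -iou_thresh 0.9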
Hi @AlexeyAB,
Thank you for your reply. I calculated anchors before training, and I've checked: with -iou_thresh 0.9 the mAP is 15.23%, and with -iou_thresh 0.8 the mAP is 85.72%.
Can you attach your cfg-file?
Hi @AlexeyAB,
This is my cfg. I've added a new class, so my previous number of classes was 8 and now it is 9.
Add these 2 lines to each of the 3 [yolo] layers, then train again or continue training:
iou_normalizer=0.5
iou_loss=giou
And change ignore_thresh = .7 to ignore_thresh = .9
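A minimal sketch of how one [yolo] section ends up after these edits (all other existing parameters such as mask, anchors and classes stay as they are; repeat for all 3 [yolo] layers):
[yolo]
# ... existing mask, anchors, classes, etc. unchanged ...
ignore_thresh = .9
iou_normalizer=0.5
iou_loss=giou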
Thanks, I've changed it and it's training now. Can I use the model in OpenCV-dnn with these changes?
Yes.
If something goes wrong in OpenCV-dnn, just comment out these two lines after training: iou_normalizer=0.5 and iou_loss=giou
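For example, in each [yolo] section (Darknet treats cfg lines starting with # as comments):
# iou_normalizer=0.5
# iou_loss=giou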
Hi @AlexeyAB,
Thank you very much for your suggestion. After the training, I checked the mAP again and the result is:
-iou_thresh 0.9: mAP is 83.87%, -iou_thresh 0.8: mAP is 93.13%
When should I change the iou_normalizer? I think GIoU is better than IoU for me. Can I use [iou_normalizer & iou_loss=giou] in SPP? Can I get higher mAP?
Thanks.
When should I change the iou_normalizer? I think GIoU is better than IoU for me.
iou_normalizer is something like a learning rate for the width & height of objects. You can try to reduce this value if both cases occur together: some of the bboxes are larger than the truth and some of the bboxes are smaller than the truth.
Can I use [iou_normalizer & iou_loss=giou] in SPP?
yes
Can I get higher mAP?
yes.
Also, I added some changes, so you can try to download the new version of Darknet and train with it.
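For example, assuming you build from source with make on Linux:
cd darknet
git pull
make clean
make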
Hi AlexeyAB,
Thanks for the reply. I'll try it. And about "Can I get higher mAP?": how? Is there a way to increase the accuracy?
Use yolov3-spp.cfg with GIoU. But it will work much slower.
And the last question☺. Can I use EfficientNet in OpenCV-dnn? What networks are supported in OpenCV?
Thanks a lot.
OpenCV-dnn supports: yolov3-tiny, yolov3-tiny_3l, yolov3, yolov3-spp.
You can create an Issue for supporting EfficientNet in OpenCV-dnn https://github.com/AlexeyAB/darknet/blob/master/cfg/enet-coco.cfg
Like this issue: https://github.com/opencv/opencv/issues/15724
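For the supported models, loading a trained Darknet model in OpenCV-dnn looks roughly like this (a minimal Python sketch; the cfg, weights, and image paths are placeholders for your own files):
import cv2
import numpy as np

# placeholder paths: use your own trained cfg and weights
net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny_best.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

img = cv2.imread("test.jpg")
# Darknet models expect RGB input scaled to [0, 1]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())

# each detection row: [cx, cy, w, h, objectness, class scores...]
for out in outs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:
            print(class_id, float(scores[class_id]))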
Thanks @AlexeyAB, I'll close this issue.
@AlexeyAB @zpmmehrdad OpenCV-DNN now supports the PRN networks. It will be included in the next release of OpenCV 3.x and 4.x. :-) Check the ticket Alexey linked to! It's a super good network. But gotta wait a bit for OpenCV to release its next version... ;-)
@VideoPlayerCode Hi,
Thanks for the information.
@VideoPlayerCode Do they plan to support EfficientNet? https://github.com/AlexeyAB/darknet/blob/master/cfg/enet-coco.cfg
Hi @zpmmehrdad, I want to ask how to find the anchor boxes for my data set. Thanks
@yrc08 Hi,
You can use this command: darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
@zpmmehrdad, hi, thank you very much for your quick answer, but I also want to ask if I can get the anchor values directly from the above command. Thanks!
Yes, after running the command, a text file called anchors.txt is created.
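The pairs from anchors.txt then replace the anchors= line in every [yolo] section of the cfg. The format looks like this (the numbers below are just the default yolov3 anchors, shown to illustrate the format, not your computed values):
[yolo]
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326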
@zpmmehrdad ,Hi Thank you for your reply.
Hi @AlexeyAB, I've trained a model to identify fields. The bounding box is not right when a field is detected.
Number of images: ~500, Classes: 8, Iterations: ~20K, Model: YOLOv3-tiny, mAP: 99%, IoU: 88%
Ground truth: (screenshot attached)
Detected: (screenshot attached)
I used "calc_anchors" too, but it didn't make much difference. I'm going to run it on CPU, and I'd rather use the Tiny model. How can I improve it?
Thanks in advance.