experiencor / keras-yolo2

Easy training on custom dataset. Various backends (MobileNet and SqueezeNet) supported. A YOLO demo to detect raccoons, running entirely in the browser, is accessible at https://git.io/vF7vI (not on Windows).
MIT License

Anchor boxes dimensions #272

Open titanbender opened 6 years ago

titanbender commented 6 years ago

Based on my dataset, I found a surprising result. My dataset consists of 1,300+ XML bounding boxes of canine hips, and each hip is annotated as a perfect square. However, when I run gen_anchors.py, the results are the following:

average IOU for 5 anchors: 0.91
anchors: [2.55,2.07, 3.10,2.51, 3.58,2.86, 4.22,3.19, 5.16,4.09]

Why are the anchors not square? And given that, how am I able to get a mAP of 1.00 if my anchors don't reflect the aspect ratio of the ground-truth boxes?

Thanks, Johan

eslambakr commented 6 years ago

1- The anchors aren't square because we get them from k-means, so whether any of the 5 anchors comes out square is a matter of chance; don't worry about it.
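For reference, here is a minimal sketch of k-means clustering with a 1 - IoU distance over (width, height) pairs, which is the approach gen_anchors.py takes; the function names and initialization details here are illustrative, not the repo's exact code:

```python
import numpy as np

def iou_wh(box, centroids):
    # IoU between one (w, h) box and each (w, h) centroid, with both
    # boxes assumed to share the same top-left corner.
    w, h = box
    inter = np.minimum(w, centroids[:, 0]) * np.minimum(h, centroids[:, 1])
    union = w * h + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    # boxes: (N, 2) array of ground-truth widths/heights in grid-cell units.
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU
        # (i.e. the smallest 1 - IoU distance).
        assign = np.array([np.argmax(iou_wh(b, centroids)) for b in boxes])
        new = np.array([boxes[assign == i].mean(axis=0)
                        if np.any(assign == i) else centroids[i]
                        for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids[:, 0])]
```

The centroids are just averages of whichever boxes land in each cluster, so nothing forces them onto the w == h line even when the data is nearly square.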

2- Because during training, the model learns to fit the labels, not the anchors. The anchor technique just speeds up training and improves accuracy.
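In other words, the anchors are only priors: the network predicts offsets relative to them, roughly as in this YOLOv2-style decoding sketch (not this repo's exact code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t, anchor_wh, cell_xy, grid_wh=(13, 13)):
    # YOLOv2-style decoding: the network outputs offsets (tx, ty, tw, th);
    # the anchor (pw, ph) is just a prior that exp(tw) and exp(th) rescale,
    # so the final box can take any aspect ratio the labels require.
    tx, ty, tw, th = t
    pw, ph = anchor_wh
    cx, cy = cell_xy
    gw, gh = grid_wh
    bx = (cx + sigmoid(tx)) / gw   # box center x, normalized to [0, 1]
    by = (cy + sigmoid(ty)) / gh   # box center y
    bw = pw * np.exp(tw) / gw      # box width: anchor width times exp(tw)
    bh = ph * np.exp(th) / gh      # box height: anchor height times exp(th)
    return bx, by, bw, bh

# With tw = ln(2), the 2.55 x 2.07 anchor decodes to a box twice as wide:
print(decode_box((0.0, 0.0, np.log(2.0), 0.0), (2.55, 2.07), (6, 6)))
```

So even if an anchor's aspect ratio is slightly off, the network only needs to learn a small offset to match the ground truth.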

titanbender commented 6 years ago

@eslambakr thanks for the quick answer! I thought the anchors were fixed. Does this mean that an anchor with the values 1.00, 2.00 is the same as 2.00, 4.00, since only the ratio is important?

eslambakr commented 6 years ago

If you read the anchor-generation code, you will notice that the size of the images matters. If only the ratio were important, the image size would not be used; but that isn't the case, so the values of the anchors themselves are important. I hope that clarifies it.
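To make that concrete: if I read gen_anchors.py correctly, each box's width and height are normalized by the image width and height separately (and scaled to grid-cell units) before clustering, so the absolute anchor values encode real sizes, not just ratios. A rough sketch with made-up numbers:

```python
# Anchor values in this repo are in grid-cell units, not pure ratios
# (a sketch of the normalization before clustering; numbers are made up):
grid_w, grid_h = 13, 13
image_w, image_h = 640, 480                  # a non-square image
xmin, ymin, xmax, ymax = 100, 100, 300, 300  # a perfectly square 200x200 box

w = (xmax - xmin) / image_w * grid_w   # 200/640 * 13 ≈ 4.06 cells
h = (ymax - ymin) / image_h * grid_h   # 200/480 * 13 ≈ 5.42 cells
```

This also suggests why the anchors in the original question are not square: a box that is square in pixels becomes non-square in grid units whenever the image itself is not square.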