Soda-Wong opened this issue 6 years ago
How did you get such output for `detector recall` with `Precision: 100%`?
There is no `Precision` output there: https://github.com/AlexeyAB/darknet/blob/0c95d8dfac7869ae4258f2080950cae6ea6a647e/src/detector.c#L487
Try to revert your changes, update your code from GitHub, recompile, and run `detector recall` again.
@AlexeyAB I changed this line to:

```c
fprintf(stderr, "Number:%5d Correct:%5d Total:%5d Proposals:%5d\tRPs/Img: %.2f\tIOU: %.2f%%\tRecall:%.2f%%\tPrecision:%.2f%%\n", i, correct, total, proposals, (float)proposals / (i + 1), avg_iou * 100 / total, 100.*correct / total, 100.*correct / proposals);
```

I recompiled it, ran `detector recall` again, and got the same result: darknet.exe has stopped working.
@Soda-Wong Try to run `detector recall` without your changes.
Maybe there is a divide by zero in the expression `100.*correct / proposals`, if one of the images doesn't have objects and `proposals == 0`.
I just tested `recall` on my dataset and it works fine:
@AlexeyAB I have no idea, because the printed proposals value is not zero in that screenshot.
But I noticed something strange: the recall of yolov3-voc_100.weights and yolov3-voc_200.weights is normal, they do well. But when the weights go to 300 iterations and more, I get the same error.
@AlexeyAB I printed every IOU for the bounding boxes of the image, and it goes wrong.
For the same image, sometimes `k=10`, sometimes `k` takes other values.
@Soda-Wong There was a bug that was already fixed on Apr 2, 2018 (13 days ago), so update your code: https://github.com/AlexeyAB/darknet/commit/726cebd3fb67d65ec6d2d49fa6bfba4c053085df#diff-d77fa1db75cc45114696de9b1c005b26R474
Use `for (k = 0; k < nboxes; ++k) {` instead of `for (k = 0; k < l.w*l.h*l.n; ++k) {`
@AlexeyAB Thanks!! I wonder what the difference is between `nboxes` and `l.w*l.h*l.n` when I compare YOLOv2 and YOLOv3.
But I find the IOU, recall and precision are so high that it makes me wonder whether the model is overfitting.
`l.w*l.h*l.n` is only for a single detection layer `l`, which is what Yolo v2 uses.
`nboxes` is for all 3 detection layers that are used in Yolo v3.
Also, items with `prob < thresh` are removed from `nboxes`: https://github.com/AlexeyAB/darknet/blob/5c1e8e3f48343d8944af1195e21f6f3b53ed848e/src/yolo_layer.c#L368
This is a good mAP; I think there is no overfitting.
To check whether there is overfitting, collect images that weren't used for training, label them, put their paths into the file `valid.txt`, set the param `valid = valid.txt` in the `obj.data` file, and run `detector map` for different weights files (i.e. different iteration numbers).
If a weights file with a higher iteration number gives a lower mAP on the validation dataset, then there is overfitting. Otherwise there is no overfitting.
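For example, an `obj.data` with a separate validation list might look like this (the class count and all paths here are placeholders for your own dataset):

```
classes = 6
train  = data/train.txt
valid  = data/valid.txt
names  = data/obj.names
backup = backup/
```

Then run `darknet.exe detector map data/obj.data yolov3-voc.cfg backup/yolov3-voc_XXXX.weights`, substituting each saved iteration number for `XXXX`, and compare the mAP values.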
@AlexeyAB Why do these results appear? Is it due to the dataset?
@ss199302 Did you try `./darknet detector map ...`?

Hi @AlexeyAB, I have a problem with recall.
I trained my own dataset with yolo-voc.cfg and got correct models; I can obtain correct results using those trained models. But when I run `./darknet detector recall` I get a `nan` error.
Can you help me?
@Caroline1994 Try to update your code from GitHub. And what mAP can you get?
I have a problem with recall. I trained my own dataset and got models like this.
I don't know why the others are 0 KB, but I tested yolov3-voc_38500.weights and got great results. So I want to check the recall of this model:

```
darknet.exe detector recall data/voc_wwj.data yolov3-voc.cfg backup/sixclass/yolov3-voc_38500.weights
```

And when I run to this step, there is a problem: darknet.exe has stopped working.
I checked the mAP, and the result is much better than YOLOv2:
So I wonder what the problem is, if it is not my model's fault? Can anyone help me? @AlexeyAB