Open dnaveenr opened 6 years ago
Thanks a lot for the quick reply. When I run the ./darknet detector map command on the training dataset, I get this:

for thresh = 0.25, precision = -nan, recall = 0.00, F1-score = -nan
for thresh = 0.25, TP = 0, FP = 0, FN = 6132, average IoU = -nan %
mean average precision (mAP) = 0.000000, or 0.00 %
But during training, I didn't notice any NaNs. The dataset seems to be in the right format, but I'll verify it using Yolo_mark anyway and get back to you.
Can the dataset have filenames like image- (9).jpg/txt, image - (1).jpg/txt, etc.? The 300-class dataset contains filenames in this format. Would the filenames be an issue? I didn't get any errors while training, though.
mAP is bad.
Can you show your obj.data file and the command you used for training?
Can the dataset have filenames like image- (9).jpg/txt , image - (1).jpg/txt etc,
An image filename can contain "(9)", but it must not contain the substrings "images", "JPEGImages", or "raw": https://github.com/AlexeyAB/darknet/blob/c2c8595b083ec3586a99bb913b8a986e81e3a42a/src/data.c#L299-L302
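To illustrate why those substrings matter, here is a rough Python sketch (not darknet's actual C code, which lives in the data.c lines linked above) of how darknet derives a label path from an image path: it substitutes folder-name substrings and then swaps the extension, so any of those substrings appearing in the filename itself also gets replaced and the label lookup breaks. The exact list of substitutions is an assumption based on the linked code.

```python
def image_to_label_path(image_path: str) -> str:
    # Sketch of darknet's path substitution: folder substrings are
    # replaced with "labels", then the image extension becomes ".txt".
    label_path = image_path
    for folder in ("images", "JPEGImages", "raw"):
        label_path = label_path.replace(folder, "labels")
    for ext in (".jpg", ".jpeg", ".png", ".bmp"):
        label_path = label_path.replace(ext, ".txt")
    return label_path

# A filename containing "images" gets mangled, so the .txt is never found:
print(image_to_label_path("data/obj/my_images_01.jpg"))  # data/obj/my_labels_01.txt
# while "(9)" in a name is harmless:
print(image_to_label_path("data/obj/image- (9).jpg"))    # data/obj/image- (9).txt
```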
Yes, it is bad. Okay, so the filenames are not an issue. The content of obj.data is:

classes = 313
train = data/train4.txt
valid = data/train4.txt
names = data/obj.names
backup = backup/
The command used for training is:
./darknet detector train data/obj.data cfg/yolov2-tiny-obj.cfg yolov2-tiny-voc.conv.13
I have changed the yolov2-tiny-obj.cfg file appropriately (i.e., classes and filters).
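As a quick sanity check on the "filters" change: for YOLOv2-family cfg files, the last convolutional layer before the region layer needs filters = num_anchors * (coords + 1 + classes), assuming the default 5 anchors and 4 box coordinates:

```python
def region_filters(classes: int, num_anchors: int = 5, coords: int = 4) -> int:
    # Each anchor predicts: coords (x, y, w, h) + 1 objectness + class scores
    return num_anchors * (coords + 1 + classes)

print(region_filters(313))  # 313 classes -> 5 * (4 + 1 + 313) = 1590
print(region_filters(20))   # 20 classes (VOC) -> 125, matching yolov2-tiny-voc.cfg
```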
batch=64, subdivision=8 in the cfg-file?

Yes, I used batch = 64 and subdivisions = 8 in the cfg file. I didn't make any other changes; I just used the same parameters as in yolov2-tiny-voc.cfg.
I just checked my dataset annotations. They are perfect.
Also, I noticed there were a few extra annotation (.txt) files without corresponding images in the dataset folder. [e.g., image7.txt is present, but there is no image7.jpg] I don't think this should be an issue, since we provide all the image paths in train.txt.
Okay, I removed the extra annotation files from the dataset folder and retrained the model. After 1k iterations, I tested with -thresh 0.01 and I see some predictions now. So I think this was the issue.
Thanks for your help. :)
Hi @AlexeyAB, is there a performance (accuracy) comparison between YOLOv3 and YOLO9000 when training on custom objects? It seems there is no training tutorial for YOLO9000.
I didn't train YOLOv3 on a large number of objects, and I didn't compare them. I think that for fewer than 100 classes, YOLOv3 is much more accurate.
Is it possible to add a ResNet/DenseNet backbone to YOLOv3, just as in YOLOv2? Is it necessary? Thank you.
I have trained Tiny-YOLOv2 on more than 300 classes of a custom dataset, but when I test, I see no detections on the test data even after 20k-30k iterations. I tried reducing the threshold (-thresh 0.01), but still no detections. I ran a similar test with 100 classes and the results were quite good. Is there a limit on the maximum number of classes YOLOv2 can detect? In the abstract of the YOLO9000 paper, I see this:
Does this mean YOLOv2 can detect a maximum of only 200 classes? Please help.