AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

Regarding accuracy and Tiny-yolo #3219

Open sudharavali opened 5 years ago

sudharavali commented 5 years ago

Hello @AlexeyAB,

Thank you for your great work! I used Tiny YOLO for object detection on my custom dataset with only one object class. Now, to measure how accurate the model is, should I give it a test dataset and calculate the average of the bounding-box scores, or is the mAP value a better indicator? Can you please tell me which we should choose and why? I am a little confused. I would also like to add that for a few cases, the data in my test dataset looks very different from what I trained on (which I have done on purpose). Should I change any values in obj.data?

Also, do you think I could improve the accuracy by using the full YOLO model instead of Tiny YOLO? Please let me know!

Thanks in advance.

AlexeyAB commented 5 years ago

@sudharavali Hi,

The mAP is all you need.

Also, do you think I could improve the accuracy by using the full YOLO model instead of Tiny YOLO?

Yes.

sudharavali commented 5 years ago

Thank you for your quick reply! I just have one follow-up question: when I train on one dataset and want to check its mAP score on another set of data, do I have to change any values in the obj.data file?

To elaborate, I am training a model to detect an object in a simulated environment and want to see how it will perform on real samples using the mAP values. Please let me know how to do it.

Thank you in advance!

AlexeyAB commented 5 years ago

I just have one follow-up question: when I train on one dataset and want to check its mAP score on another set of data, do I have to change any values in the obj.data file?

Just set these lines in the obj.data file:

train=one_dataset.txt
valid=another_dataset.txt
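
For reference, a minimal obj.data for a one-class model might look like this (the names and backup entries are placeholders for your own files, not values from this thread):

classes = 1
train = one_dataset.txt
valid = another_dataset.txt
names = obj.names
backup = backup/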

To elaborate, I am training a model to detect an object in a simulated environment and want to see how it will perform on real samples using the mAP values. Please let me know how to do it.

Just use two datasets, one_dataset.txt and another_dataset.txt, as I described above. Can you show examples of images from the training and testing datasets?

sudharavali commented 5 years ago

Thanks a lot for your prompt reply. Of course I can show you the images:

So for one detection model, the training images look like the following:

[image: Drone_scene_1002]

[image: Drone_scene_1446]

For another detection model with another dataset, the training images look like the following:

[image: B247]

[image: B355]

I am trying to evaluate its accuracy and mAP on images that look like this:

[image: create100_original_312.jpg_fdd33f35-4969-4338-bd25-274e7609f1db]

[image: create100_original_356.jpg_1f6c4835-8e63-48f9-bb92-6c8ce8f7e93f]

[image: something]

I have one question pertaining to this: for valid=another_dataset.txt, should it be 80% of the dataset like the train split, or 20% of the dataset like any other validation set?

Thanks in advance!

AlexeyAB commented 5 years ago

valid=another_dataset.txt can be any dataset that doesn't intersect with the training dataset.
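
To compute the mAP on that set, you can run the detector map command (the .cfg and .weights names below are placeholders for your own files):

./darknet detector map obj.data yolov3-tiny-obj.cfg backup/yolov3-tiny-obj_final.weights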

sudharavali commented 5 years ago

Thanks a lot for your help! After doing what you asked, these are the values observed for one detection model:

Validation set: mAP 38%, average bounding-box confidence 49%
Test set (real samples): mAP 98.4%, average bounding-box confidence 4%

Can you tell me how to interpret this? Why is the bounding-box confidence so low when the mAP is so high for the test dataset?

Thank you again in advance!

AlexeyAB commented 5 years ago

The confidence score is not important.

You only need the mAP.

Why is the bounding-box confidence so low when the mAP is so high for the test dataset?

What confidence threshold did you use to calculate the confidence score? By default it is 0.25, so the average value can't be less than 25%.

sudharavali commented 5 years ago

What confidence threshold did you use to calculate the confidence score?

For 200 test images, the model didn't draw a bounding box on a majority of them. So I just took the sum of the individual confidence scores (many of which were 0) and divided it by the total number of images (200 in this case). That is how I calculated it and got 4%.
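
(As an illustration with made-up numbers: if only 16 of the 200 images got a detection, each at about 50% confidence, the average over all 200 images would be 16 × 0.5 / 200 = 4%, even though every drawn box is above the 25% threshold. So the average over detections can't be below 25%, but the average over all images can be.)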

By default it is 0.25, so the average value can't be less than 25%.

Can you tell me where to find this value? Is it part of the config file?

AlexeyAB commented 5 years ago

-thresh 0.25 flag: https://github.com/AlexeyAB/darknet#how-to-use-on-the-command-line

./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -thresh 0.25
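
For example, to see lower-confidence detections on images where no box is drawn, you can lower the threshold (the obj.data, .cfg, and .weights names here are placeholders for your own files):

./darknet detector test obj.data yolov3-tiny-obj.cfg backup/yolov3-tiny-obj_final.weights -thresh 0.1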

sudharavali commented 5 years ago

I see that the threshold is set to 25% for images, but the model still does not detect the drone in a few images. Please see the result attached below.
