zzh8829 / yolov3-tf2

YoloV3 Implemented in Tensorflow 2.0
MIT License

How is accuracy measured? #132

Open · Robin2091 opened this issue 4 years ago

Robin2091 commented 4 years ago

Hello,

I was wondering how the accuracy of the model is measured and how it differs from mAP. I am training YOLOv3-tiny and I am only getting 30 percent accuracy for "yolo_0_output".

I fixed my dataset, gathered more data, and was then able to reach 45 percent accuracy. However, throughout training the accuracy just fluctuated between 40 and 47 percent, with no significant improvement. I then retrained and was back to 30 percent accuracy for some reason. So I don't understand why the accuracy fluctuates so much, or why I get different accuracies when training on two separate occasions with the same dataset. Also, I measured the mAP with another repo, and despite the differences in accuracy I still get about the same mAP of 65-68 percent.

If someone can shed some light on how accuracy is measured and why I am seeing these fluctuations, it would be really helpful.

Thank you

Edit: I should add that the first time I trained on my fixed dataset (45 percent accuracy) I had the IoU threshold at 0.3, but the second time I had it at 0.1. I don't know whether this affects the accuracy of the model, though.

zzh8829 commented 4 years ago

Did you use transfer learning? How are you measuring the accuracy?

Robin2091 commented 4 years ago

@zzh8829 Yes, I used transfer learning (darknet). In the model.compile call I put metrics=['accuracy']. I also tested the mAP with a different repo and got an AP of 70 percent at an IoU of 0.3. My code: [screenshot]
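For context, a minimal sketch of what that compile call looks like, following the structure of this repo's train.py (num_classes and the optimizer settings are placeholders):

```python
import tensorflow as tf

from yolov3_tf2.models import (
    YoloV3Tiny, YoloLoss, yolo_tiny_anchors, yolo_tiny_anchor_masks
)

num_classes = 2  # placeholder: your dataset's class count

model = YoloV3Tiny(416, training=True, classes=num_classes)

# One YoloLoss per output scale, as in the repo's train.py
loss = [YoloLoss(yolo_tiny_anchors[mask], classes=num_classes)
        for mask in yolo_tiny_anchor_masks]

# metrics=['accuracy'] reports a per-output "accuracy", but Keras computes it
# directly on the raw grid-level training targets, not on decoded boxes after
# NMS, so it says little about detection quality and is not comparable to mAP.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=loss,
              metrics=['accuracy'])
```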

imAhmadAsghar commented 4 years ago

> @zzh8829 Yes, I used transfer learning (darknet). In the model.compile call I put metrics=['accuracy']. I also tested the mAP with a different repo and got an AP of 70 percent at an IoU of 0.3. My code: [screenshot]

Hi, can you please share which repo you used to calculate the mAP, and how? Did you use the trained weights or something?

Robin2091 commented 4 years ago

@asquare92 I don't believe the accuracy metric gives a good measure of the model's actual performance, because non-max suppression is not applied during training. Also, I am not sure the built-in accuracy metric performs the same calculation as mAP. So I used a separate repo. Yes, I used the trained weights.

https://github.com/rafaelpadilla/Object-Detection-Metrics

imAhmadAsghar commented 4 years ago

> @asquare92 I don't believe the accuracy metric gives a good measure of the model's actual performance, because non-max suppression is not applied during training. Also, I am not sure the built-in accuracy metric performs the same calculation as mAP. So I used a separate repo.
>
> https://github.com/rafaelpadilla/Object-Detection-Metrics

@Robin2091 Thanks. Just a small question: how did you create the detection files after training the model with this repo? And for the ground-truth files, do I just use my annotations?

Robin2091 commented 4 years ago

@asquare92 I loaded the model with the trained weights, looped through a set of images, and ran detection on each one, saving the bounding-box information to a text file. Yes, use your annotations as the ground-truth files.
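For anyone who finds this later, here is a minimal sketch of that loop. It assumes this repo's inference API (YoloV3Tiny built with training=False returns NMS-filtered boxes, scores, classes, nums) and writes one text file per image in the `<class> <confidence> <left> <top> <right> <bottom>` format that rafaelpadilla/Object-Detection-Metrics expects; the checkpoint path, image folder, and class list are placeholders:

```python
import glob
import os

import numpy as np
import tensorflow as tf

from yolov3_tf2.models import YoloV3Tiny
from yolov3_tf2.dataset import transform_images

SIZE = 416
class_names = ['cat', 'dog']  # placeholder: your class list, in training order

# training=False (the default) appends box decoding + non-max suppression
yolo = YoloV3Tiny(classes=len(class_names))
yolo.load_weights('./checkpoints/yolov3_train_10.tf')  # placeholder checkpoint

os.makedirs('detections', exist_ok=True)
for path in glob.glob('images/*.jpg'):  # placeholder image folder
    img_raw = tf.image.decode_image(open(path, 'rb').read(), channels=3)
    height, width = img_raw.shape[0], img_raw.shape[1]
    img = transform_images(tf.expand_dims(img_raw, 0), SIZE)

    boxes, scores, classes, nums = yolo(img)

    name = os.path.splitext(os.path.basename(path))[0]
    with open(os.path.join('detections', name + '.txt'), 'w') as f:
        for i in range(int(nums[0])):
            # boxes are normalized (x1, y1, x2, y2); scale back to pixels
            x1, y1, x2, y2 = np.array(boxes[0][i]) * [width, height, width, height]
            f.write('%s %.4f %d %d %d %d\n' % (
                class_names[int(classes[0][i])], float(scores[0][i]),
                x1, y1, x2, y2))
```

The ground-truth files then mirror this layout, one .txt per image, with `<class> <left> <top> <right> <bottom>` lines (no confidence value), which is what that metrics repo compares the detections against.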