AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

How to improve tiny-yolov3 detections #6364

Open ninenerd opened 4 years ago

ninenerd commented 4 years ago

The model is around 90% accuracy on the important label but is stuck there. The problem is that the model sometimes misses obvious detections out of nowhere, and sometimes falsely detects the object.
Is this normal? How do I know the model is at its saturation level? After all, tiny YOLO has a limited number of parameters to optimize. Will using custom anchors help?
Any other changes?
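On the custom-anchors question: darknet's `calc_anchors` computes them by clustering the (width, height) of the training boxes with k-means, using 1 - IoU as the distance. A minimal sketch of that idea (the function names and the random data are mine, not darknet's):

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between box shapes and cluster centroids, both anchored at the
    origin, so only width/height matter (as in anchor clustering)."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """Cluster (w, h) pairs; tiny-YOLO cfgs use 6 anchors, hence k=6."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid it overlaps most.
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)
        new = np.array([boxes[assign == j].mean(axis=0)
                        if np.any(assign == j) else clusters[j]
                        for j in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    # Darknet lists anchors smallest-first, so sort by area.
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]
```

Running this on your own label files' box sizes (scaled to network input resolution) gives the numbers to paste into the `anchors=` line of the cfg.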

YCAyca commented 4 years ago

The two most important things that affect accuracy are annotating the dataset 100% accurately and initializing the weights in an efficient way. To improve my accuracy I generally follow these steps:

After doing these optimizations, I check my model's performance with different test thresholds using my updated code, which you can get from https://github.com/YCAyca/YCA_VG_AlexeyAB_darknet. It lets you see the model's performance at several thresholds in one run, to find the best match.
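The trade-off behind a threshold sweep: raising the confidence threshold removes false positives but also drops true detections. A small sketch of that computation (the detection list and the `matched_gt` flag are hypothetical inputs; in practice the flag comes from IoU matching against ground truth):

```python
def sweep_thresholds(detections, num_gt, thresholds=(0.25, 0.5, 0.75)):
    """detections: list of (confidence, matched_gt) pairs, where matched_gt
    is True when the detection overlaps a ground-truth box.
    num_gt: total number of ground-truth objects.
    Returns {threshold: (precision, recall)}."""
    results = {}
    for t in thresholds:
        kept = [matched for conf, matched in detections if conf >= t]
        tp = sum(kept)                      # kept detections that are correct
        fp = len(kept) - tp                 # kept detections that are wrong
        precision = tp / (tp + fp) if kept else 0.0
        recall = tp / num_gt if num_gt else 0.0
        results[t] = (precision, recall)
    return results

dets = [(0.9, True), (0.8, True), (0.6, False), (0.3, True), (0.2, False)]
print(sweep_thresholds(dets, num_gt=4))
```

Picking the threshold where precision and recall balance for your use case is exactly the "best match" search described above.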

ninenerd commented 4 years ago

@YCAyca 1. As far as I remember, I was using yolov3-tiny.conv.15 to train tiny YOLO with pre-trained weights. Recently AlexeyAB made changes to the repo for YOLOv4.
Now, as you mention, yolov3-tiny.conv.11 is there. Is the one that keeps fewer pre-trained layers (11) the correct one? Could this be the issue?

  1. Earlier my data had poor images as well. I removed them, but I have not annotated all the objects, only the quality ones, so that the model can avoid false detections.
YCAyca commented 4 years ago

If an image in your dataset shows the object, you have to annotate it so the model sees it too. Otherwise you are giving positive samples as negative samples, and this may cause a reduction in your accuracy. On the other hand, how many images do you train with, and how many iterations do you run? Continuing to train with a higher iteration count may help. I don't think the reason is that you use the .14 as pretrained weights; that is okay too.
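A quick way to catch the "positive samples given as negatives" problem is to scan the dataset for images whose Darknet label file is missing or empty, since in Darknet's format every visible object should have a `<class> <x> <y> <w> <h>` line in the matching `.txt`. A minimal sketch (directory layout assumed: labels sit next to images with the same base name):

```python
import os

def find_unlabeled(image_dir, exts=(".jpg", ".jpeg", ".png")):
    """Return image files whose YOLO label .txt is missing or empty.
    Such images act as pure background during training, so any object
    visible in them is effectively taught as a negative sample."""
    problems = []
    for name in sorted(os.listdir(image_dir)):
        base, ext = os.path.splitext(name)
        if ext.lower() not in exts:
            continue
        label = os.path.join(image_dir, base + ".txt")
        if not os.path.exists(label) or os.path.getsize(label) == 0:
            problems.append(name)
    return problems
```

This only finds fully unlabeled images; partially labeled ones (some objects annotated, some not) still need a manual review pass.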

ninenerd commented 4 years ago
  1. I have around 23k images and I set it to over 500k iterations. Most of the time I get the best weights around 230k.
  2. In the uploaded images there are many people, and annotating all of them would result in poor learning, since the facial features are not clear in all of them. Are you pointing towards annotating all of them because it affects the loss function? Example: suppose there are 20 people in an image but I annotate only the 10 quality ones; the model might predict 15 in the image, and that will count as loss.
    My point is that annotating everything would also result in some wrong detections, especially for the ones that are very small. Is this right?
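To make the loss-function concern above concrete: during evaluation (and in the objectness part of training), a prediction that overlaps no annotated box is scored as a false positive even if it sits on a real but unlabeled person. A toy sketch with hypothetical boxes in (x1, y1, x2, y2) form:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_false_positives(predictions, ground_truth, thr=0.5):
    """A prediction with no ground-truth box at IoU >= thr counts as a
    false positive, even when it actually covers an unannotated object."""
    return sum(1 for p in predictions
               if not any(iou(p, g) >= thr for g in ground_truth))

# Two correct-looking detections, but only one person was annotated:
preds = [(0, 0, 10, 10), (20, 20, 30, 30)]
gt = [(0, 0, 10, 10)]
print(count_false_positives(preds, gt))
```

So leaving real objects unannotated penalizes exactly the detections you want the model to make, which is why the advice above is to annotate every visible instance, however small.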