david8862 / keras-YOLOv3-model-set

end-to-end YOLOv4/v3/v2 object detection pipeline, implemented on tf.keras with different technologies
MIT License

How to reduce false negatives in output? #21

Open · govindamagrawal opened this issue 4 years ago

govindamagrawal commented 4 years ago

Hi David, I am using tiny Darknet YOLOv3 for training. When I train on my own dataset, I get a lot of false negative bounding boxes when predicting with yolo.py. Which parameter needs to be changed to increase the threshold?

david8862 commented 4 years ago

By default, you can use the "confidence" param in yolo3_postprocess_np to control the predicted bounding boxes. It is a prediction score threshold, so when you raise this param, more bboxes with low scores (which are generally false predictions) will be filtered out.
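
For illustration, here is a minimal sketch of this kind of score-threshold filtering in plain NumPy. It is not the repo's actual implementation (the real logic lives in yolo3_postprocess_np, whose exact signature may differ); the function and variable names below are just for the example.

```python
# Minimal sketch of confidence-threshold filtering (plain NumPy, illustrative only).
import numpy as np

def filter_by_confidence(boxes, scores, classes, confidence=0.1):
    """Keep only detections whose prediction score is at or above the threshold."""
    keep = scores >= confidence
    return boxes[keep], scores[keep], classes[keep]

# Example: raising the threshold from 0.1 to 0.5 drops the low-score boxes.
boxes = np.array([[10, 10, 50, 50], [20, 20, 80, 80], [5, 5, 30, 30]])
scores = np.array([0.92, 0.45, 0.07])
classes = np.array([0, 1, 0])

print(filter_by_confidence(boxes, scores, classes, confidence=0.1))  # keeps 2 boxes
print(filter_by_confidence(boxes, scores, classes, confidence=0.5))  # keeps 1 box
```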

farhodbekshamsiyev commented 4 years ago

Hi @david8862, can you suggest a value for the "confidence" param in yolo3_postprocess_np.py to get better results? Right now I am using 0.5. It has been 10 hours and 13/50 epochs; I hope I get good results!

david8862 commented 4 years ago

Hi @farhodbekshamsiyev, in my experience the default confidence of 0.1 is usually a good balance between precision and recall for a well-trained model and gives reasonable detection results. If you lower it (e.g. to 0.001), mAP will usually improve but precision will drop significantly. If you raise it, false positive detections are more likely to be filtered out, but the recall rate will suffer.
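
To see this tradeoff concretely, here is a toy sketch that sweeps the confidence threshold over a set of synthetic detection scores and prints precision/recall at each setting. The numbers are made up for illustration and are not produced by this repo.

```python
# Toy sketch: sweep the confidence threshold and watch precision/recall move.
# Scores and match labels are synthetic, not from this repo.
import numpy as np

# 1 = the detection matches a ground-truth object, 0 = false positive
is_true_positive = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
scores           = np.array([0.95, 0.9, 0.8, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1, 0.05])
num_ground_truth = 6  # total objects actually present (one is never detected)

for confidence in (0.001, 0.1, 0.5):
    kept = scores >= confidence
    tp = int(is_true_positive[kept].sum())
    fp = int((is_true_positive[kept] == 0).sum())
    precision = tp / max(tp + fp, 1)
    recall = tp / num_ground_truth
    print(f"confidence={confidence:<6} precision={precision:.2f} recall={recall:.2f}")
```

With these synthetic numbers, a higher threshold filters out most false positives (precision rises) but also discards some correct low-score detections (recall falls), which is the tradeoff described above.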

farhodbekshamsiyev commented 4 years ago

Thank you very much!!! I will keep this info in mind!