Open ring-zl opened 7 months ago
Soft NMS is applied in post-processing. It does not directly remove instances above an IoU threshold; instead it decays the scores of overlapping instances. The final score threshold is usually set empirically. A higher threshold reduces detection recall: instances that are accurately predicted but ranked lower in score get discarded, which lowers mAP.
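To make the score-decay behavior concrete, here is a minimal sketch of Gaussian Soft-NMS with NumPy. It is not the repository's actual implementation; `sigma` and `score_thresh` are illustrative parameters, and boxes are assumed to be `[x1, y1, x2, y2]`:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS sketch: decay overlapping scores instead of discarding.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns the kept indices (in pick order) and the decayed scores.
    """
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    indices = np.arange(len(scores))
    keep = []
    while len(indices) > 0:
        # pick the current highest-scoring box
        top = np.argmax(scores[indices])
        best = indices[top]
        keep.append(int(best))
        indices = np.delete(indices, top)
        if len(indices) == 0:
            break
        # IoU of the picked box with all remaining boxes
        rest = boxes[indices]
        x1 = np.maximum(boxes[best, 0], rest[:, 0])
        y1 = np.maximum(boxes[best, 1], rest[:, 1])
        x2 = np.minimum(boxes[best, 2], rest[:, 2])
        y2 = np.minimum(boxes[best, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_b + area_r - inter)
        # Gaussian decay: overlapping boxes are down-weighted, not removed
        scores[indices] *= np.exp(-(iou ** 2) / sigma)
        # only boxes whose decayed score stays above the final threshold survive
        indices = indices[scores[indices] > score_thresh]
    return keep, scores
```

With a high `score_thresh`, a heavily overlapping box is suppressed almost like classic NMS; with a low one, it survives with a reduced score, which is exactly why the final threshold matters.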
So, when I need to localize actions in a video in practical applications, I still need an empirically chosen confidence threshold to handle such a large number of outputs, right?
If an experiment is conducted on a new dataset, we cannot know the optimal value of the threshold in advance.
Thank you. Is setting a low confidence threshold considered a trick? What surprises me is that this rather extreme approach leads to a significant increase in the number of false positives, yet ultimately yields a noticeable improvement in the mAP metric.
The calculation of mAP is based on the P-R curve. A larger threshold increases precision at low-recall positions but decreases precision at high-recall positions (possibly to 0), so the integrated result decreases.
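The point above can be checked numerically. Below is a minimal AP sketch (all-point integration of the P-R curve, not any particular benchmark's interpolation rule) on a hypothetical ranked prediction list: truncating the list at a high confidence threshold throws away the true positives ranked further down, so AP drops even though the false-positive count also drops:

```python
import numpy as np

def average_precision(is_tp, num_gt):
    """AP as the area under the precision-recall curve.

    is_tp: booleans for score-sorted predictions (True = matches a ground truth).
    num_gt: total number of ground-truth instances.
    """
    is_tp = np.asarray(is_tp)
    tp = np.cumsum(is_tp)
    fp = np.cumsum(~is_tp)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # step-integrate precision over recall increments
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

# hypothetical score-sorted matches against 3 ground-truth instances
full = [True, False, False, True, False, True]   # low threshold keeps everything
cut = full[:3]                                   # high threshold keeps only top 3
```

Here `average_precision(full, 3)` is 2/3 while `average_precision(cut, 3)` is only 1/3: the extra false positives sit at low-recall positions and barely hurt, but the discarded tail true positives cap recall and remove area under the curve.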
Thank you, that's a clear explanation.
"Why is the confidence threshold for NMS set quite low in the experiment, resulting in a significantly high number of prediction outputs? Is this practically feasible? Moreover, why does appropriately increasing the NMS confidence threshold (for instance, to 0.1 or 0.3) lead to a noticeable decrease in mAP?"