Open YoungseokOh opened 2 months ago
For each ground truth, the algorithm attempts to find a matching prediction. If no prediction matches the ground truth (indicating that the model failed to detect the object), the algorithm appends a 0 to the true_positive_list.
When calculating precision and recall, a threshold is used to split the true_positive_list into true positives and false negatives:

```python
import bisect

false_negatives = bisect.bisect_left(true_positive_list, thresh)
true_positives = len(true_positive_list) - false_negatives
```
As long as the threshold is greater than 0, these appended 0s in the true_positive_list will always fall on the false-negative side of the split, which is the correct behavior. Ignoring them would undercount false negatives and therefore overestimate recall.
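For reference, here is a minimal sketch of that split on a toy, sorted list; the values are made up and the variable names simply mirror the snippet above rather than the repository code:

```python
import bisect

# Hypothetical confidences of matched predictions, plus a 0.0 appended for
# each ground truth with no matching prediction (a missed detection).
true_positive_list = sorted([0.0, 0.0, 0.31, 0.62, 0.87])

thresh = 0.5
# bisect_left counts entries strictly below the threshold; the appended 0.0s
# always land on this side whenever thresh > 0, so misses count as FNs.
false_negatives = bisect.bisect_left(true_positive_list, thresh)
true_positives = len(true_positive_list) - false_negatives

print(false_negatives, true_positives)  # 3 2
```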
Hi,
I believe there's an issue with the get_confidence_list() function.
When I used your pre-trained model, I couldn't reproduce the performance metrics you reported.
I think the else clause should be removed, because it adds values that should not be considered.
The function should only handle cases where a prediction is a true positive and matches well with the ground truth (GT).
If a prediction does not match well with a ground truth, the function should not append 0 to the same list (true_positive_list).
Appending 0 mixes these non-matching cases into the list, which skews the precision and recall calculation.
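To illustrate what I mean, here is a rough, hypothetical reconstruction of the matching step (the actual get_confidence_list() in the repository may look different):

```python
# Hypothetical reconstruction -- not the actual repository code.
def get_confidence_list(ground_truths, predictions, iou_fn, iou_threshold=0.5):
    """Collect the confidence of the best-matching prediction for each ground truth."""
    true_positive_list = []
    for gt in ground_truths:
        candidates = [p for p in predictions if iou_fn(gt, p["box"]) >= iou_threshold]
        if candidates:
            best = max(candidates, key=lambda p: p["confidence"])
            true_positive_list.append(best["confidence"])
        # Proposed change: drop the `else` branch that appends 0 here, so that
        # non-matching cases are not mixed into the confidence list.
    return sorted(true_positive_list)
```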
I will wait for your reply.
Thanks