CarolinRue opened this issue 1 month ago
Maybe you can use:

```python
import numpy as np

# confusion_matrix: square array with rows = ground truth, cols = predictions,
# where the last row/column is the background class.
TP = np.diag(confusion_matrix)
FP = np.sum(confusion_matrix, axis=0) - TP  # column sum minus diagonal
FN = np.sum(confusion_matrix, axis=1) - TP  # row sum minus diagonal
precision = TP / (TP + FP)
recall = TP / (TP + FN)

# Average over the real classes only, excluding the background entry.
average_precision = np.mean(precision[:-1])
average_recall = np.mean(recall[:-1])
f1 = 2 * (average_precision * average_recall) / (average_precision + average_recall)
```
I can't guarantee this is exactly how MMDetection computes it, though.
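As a sanity check, here is the snippet above applied to a hypothetical 2×2 matrix for a one-class detector. The layout (rows = ground truth, columns = predictions, last row/column = implicit background) and all the counts are assumptions for illustration, not values from MMDetection:

```python
import numpy as np

# Hypothetical confusion matrix for a one-class detector.
# Assumed layout: rows = ground truth, cols = predictions,
# last row/column = implicit "background" class.
#               pred: object  background
confusion_matrix = np.array([
    [80, 20],   # GT object:     80 detected, 20 missed (become FN)
    [10,  0],   # GT background: 10 spurious detections (become FP)
])

TP = np.diag(confusion_matrix)                      # [80, 0]
FP = np.sum(confusion_matrix, axis=0) - TP          # [10, 20]
FN = np.sum(confusion_matrix, axis=1) - TP          # [20, 10]
precision = TP / (TP + FP)                          # per class
recall = TP / (TP + FN)

# Drop the background class before averaging.
average_precision = np.mean(precision[:-1])         # 80/90 ≈ 0.889
average_recall = np.mean(recall[:-1])               # 80/100 = 0.8
f1 = 2 * average_precision * average_recall / (average_precision + average_recall)
```

Note that precision is well defined here even with only one real class, because the spurious detections in the background row supply the false positives.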
Hey,
I have a conceptual question. I did one-class classification and trained a model with MMDetection for it.
In one-class classification I only know the positive samples, so I can only count true positives and false negatives, and therefore only compute recall. Without a negative class I don't know the false positives (predicting positive when the actual value is negative) or the true negatives (predicting negative when the actual value is negative), so I can't calculate precision. Am I wrong?
But MMDetection calculates precision during training. How is precision calculated if I have no false positives?
When I print the confusion matrix for the test set, FalsePositive is always 100% and TrueNegative 0%. How can FalsePositive be 100% if I have no negative samples? Does the background always count as negative?
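The 100% / 0% pattern can be reproduced with a small sketch, assuming the plotted matrix is row-normalized by ground-truth count and that background acts as the implicit negative class in detection (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical absolute confusion matrix (rows = GT, cols = predictions,
# last row/column = implicit background class).
cm = np.array([
    [80., 20.],  # GT object: 80 correct, 20 missed (predicted as background)
    [10.,  0.],  # "GT background": 10 spurious detections, 0 true negatives
])

# Row-normalize to percentages, as confusion-matrix plots typically do.
cm_percent = cm / cm.sum(axis=1, keepdims=True) * 100

# Background row: every entry there is a false positive, because a detector
# never produces an explicit "true negative" box. After normalization the
# FP cell shows 100% and the TN cell shows 0%, regardless of the raw counts.
```

So if background counts as the negative class, FalsePositive = 100% and TrueNegative = 0% falls out of the normalization whenever there are any spurious detections and, by construction, zero true negatives.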
Maybe I'm missing something, but I don't get it at the moment.