Open tomerv opened 7 years ago
A correct accuracy calculation would be `return np.mean(labels * (prediction > th) + (1 - labels) * (prediction < th))`, because TP = `labels * (prediction > th)` and TN = `(1 - labels) * (prediction < th)`; in other words, accuracy is (TP + TN) / total.
But what should this `th` value be?
This is wrong... it computes the mean over a slice of the prediction array: it selects only the items where the prediction is < 0.5 and evaluates the accuracy on those predictions alone, which inflates the reported accuracy. A correct calculation would be
`return 1 - np.mean((labels * (prediction > th)) + (1 - labels) * (prediction < th))`
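For reference, the (TP + TN) / total formula discussed above can be sketched as a small standalone NumPy function. The function name and the default `th=0.5` (the usual cutoff for a probability-like output) are assumptions for illustration, not something fixed by this thread:

```python
import numpy as np

def accuracy(labels, prediction, th=0.5):
    # True positives: label is 1 and prediction exceeds the threshold.
    tp = labels * (prediction > th)
    # True negatives: label is 0 and prediction falls below the threshold.
    tn = (1 - labels) * (prediction < th)
    # Accuracy = (TP + TN) / total; the mean over the 0/1 indicator does this.
    return np.mean(tp + tn)

labels = np.array([1, 0, 1, 0])
prediction = np.array([0.9, 0.2, 0.4, 0.7])
print(accuracy(labels, prediction))  # 2 of 4 correct -> 0.5
```

Note that, unlike slicing with `prediction < 0.5`, this evaluates every example, so the denominator is always the full dataset size.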