p-lambda / verified_calibration

Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlight).
MIT License

added wrapper function in utils.py for getting accuracy of bins #9

Open sdelcore opened 3 years ago

sdelcore commented 3 years ago

Related to #3

Hello! I've implemented a wrapper function in utils.py to get the accuracy of each bin given the prediction probabilities and labels, along with a unit test. Please let me know if this is sufficient or if changes should be made.

AnanyaKumar commented 3 years ago

Thanks for the PR, we really appreciate contributions to this project!

On line 339 the code takes the average of the labels, which may not be the same as the accuracy. For example, consider a test case where we have probs_labels = np.array([[0.0, 0], [0.0, 0], [0.0, 0]]). As per the documentation, the first number is the probability that the label is 1 (predicted by the model). So in this case the accuracy in this bin should be 100% because the model is very confident the label is not 1, and the label is in fact not 1 in any of these cases. I think the code will take the average of the 0s and output 0% accuracy.
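To make the distinction concrete, here is a minimal sketch (not the PR's actual code) contrasting the average of the labels with the accuracy under 0.5-thresholding for that test case:

```python
import numpy as np

# The test case above: the model is confident the label is not 1,
# and the labels are in fact all 0.
probs_labels = np.array([[0.0, 0], [0.0, 0], [0.0, 0]])
probs, labels = probs_labels[:, 0], probs_labels[:, 1]

mean_of_labels = labels.mean()       # 0.0 -> would be reported as 0% accuracy
preds = (probs > 0.5).astype(int)    # model predicts class 0 everywhere
accuracy = (preds == labels).mean()  # 1.0 -> the accuracy is actually 100%
```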

The accuracy within a bin can be tricky to define if, for example, 0.5 falls inside one of the bins. That is, if the model is calibrated we would normally classify all examples x where P(Y = 1 | x) > 0.5 as class 1. But what if one of the bins is [0.45, 0.55]? Are these examples classified as 0 or 1? So I'm not sure there's a single right way to do this. It might need to depend on the application.
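As a hypothetical illustration of that ambiguity (the `bin_accuracy` helper below is just a sketch, not part of the library), the accuracy of a bin that straddles 0.5 changes with the classification threshold:

```python
import numpy as np

# Hypothetical helper: accuracy within one bin, given the model's
# probabilities P(Y = 1 | x) and the true labels, under a chosen threshold.
def bin_accuracy(probs, labels, threshold=0.5):
    preds = (np.asarray(probs) > threshold).astype(int)
    return (preds == np.asarray(labels)).mean()

# A bin spanning [0.45, 0.55]: whether each example counts as "classified
# as 1" flips around the threshold, so the bin's accuracy depends on it.
probs = np.array([0.46, 0.49, 0.52, 0.54])
labels = np.array([0, 0, 1, 1])
print(bin_accuracy(probs, labels, threshold=0.5))   # 1.0
print(bin_accuracy(probs, labels, threshold=0.45))  # 0.5
```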

It might help if we understood your use case for this, and how we can improve the library to support it. Could you tell us more?