Closed · Tuyki closed this issue 3 years ago
Hey! Sorry for the late reply. If you want to calibrate all of the class probabilities, you should use the marginal calibrators, for example PlattBinnerMarginalCalibrator, which calibrates every class's probability. I'll hopefully add more variants soon too!
I think `num_calibration` is a relic from an older version; a lot of the experiment code used it, so I didn't remove it. Sorry about that.
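To make the "marginal" idea concrete, here is a rough, self-contained sketch: calibrate each class's probability separately against its one-vs-rest label using histogram binning, then renormalize. This is not the library's actual implementation or API (PlattBinnerMarginalCalibrator also fits a Platt scaler before binning, and the class, method, and parameter names below are made up for illustration).

```python
# Hypothetical sketch of per-class ("marginal") histogram-binning calibration.
# Not the real library API; all names here are illustrative.

def equal_mass_bins(probs, num_bins):
    """Bin edges so each bin holds roughly the same number of points."""
    sorted_p = sorted(probs)
    return [sorted_p[int(b * len(sorted_p) / num_bins)] for b in range(1, num_bins)]

def bin_index(edges, p):
    """Index of the bin that probability p falls into."""
    for i, edge in enumerate(edges):
        if p < edge:
            return i
    return len(edges)

class MarginalHistogramCalibrator:
    def train(self, probs, labels, num_bins=10):
        # probs: list of per-example probability vectors; labels: class ids.
        num_classes = len(probs[0])
        self.edges, self.means = [], []
        for k in range(num_classes):
            pk = [p[k] for p in probs]                     # class-k probabilities
            yk = [1.0 if y == k else 0.0 for y in labels]  # one-vs-rest labels
            edges = equal_mass_bins(pk, num_bins)
            sums, counts = [0.0] * num_bins, [0] * num_bins
            for p, y in zip(pk, yk):
                i = bin_index(edges, p)
                sums[i] += y
                counts[i] += 1
            # Each bin's calibrated value is the mean label in that bin.
            self.means.append([sums[i] / counts[i] if counts[i] else 0.0
                               for i in range(num_bins)])
            self.edges.append(edges)

    def calibrate(self, prob_vec):
        out = [self.means[k][bin_index(self.edges[k], p)]
               for k, p in enumerate(prob_vec)]
        s = sum(out)
        # Renormalize so the sketch returns a proper distribution; whether and
        # how the library renormalizes is an assumption, not something this
        # sketch asserts.
        return [o / s for o in out] if s > 0 else out
```

Because every class gets its own calibrated probability, downstream metrics such as the ECE can be computed over the full distribution, which is exactly what the top-only calibrators cannot give you.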
Hi,
First I really appreciate the repository. Awesome work!
I noticed that the "top" calibrators, such as HistogramTop, PlattBinnerTop, etc., produce only the calibrated probability of the top label. I'm not sure how to adjust the probabilities of the other classes in a multi-class task. Say I originally have a probabilistic prediction [0.1, 0.8, 0.05, 0.05] and the top calibrator adjusts 0.8 down to 0.6. Should I distribute the freed 0.2 uniformly over the other 3 classes? In some cases this might change the decision, no? (I would need the complete distribution to compute, e.g., the ECE.)
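The redistribution trade-off above can be sketched concretely. Neither strategy below is the library's behaviour; both are hypothetical ways to rebuild a full distribution once a top-label calibrator has changed only the top probability (0.8 to 0.6 in the example):

```python
# Two hypothetical redistribution strategies after top-label calibration.
# Neither is part of the library; they only illustrate the trade-off.

def redistribute_uniform(probs, new_top):
    """Add the freed mass equally to the non-top classes."""
    top = max(range(len(probs)), key=probs.__getitem__)
    delta = (probs[top] - new_top) / (len(probs) - 1)
    return [new_top if i == top else p + delta for i, p in enumerate(probs)]

def redistribute_proportional(probs, new_top):
    """Scale the non-top classes so they keep their relative ratios."""
    top = max(range(len(probs)), key=probs.__getitem__)
    scale = (1.0 - new_top) / (1.0 - probs[top])
    return [new_top if i == top else p * scale for i, p in enumerate(probs)]

p = [0.1, 0.8, 0.05, 0.05]
print(redistribute_uniform(p, 0.6))       # approx. [0.167, 0.6, 0.117, 0.117]
print(redistribute_proportional(p, 0.6))  # approx. [0.2, 0.6, 0.1, 0.1]
```

Both strategies preserve the ordering among the non-top classes (one adds a constant, the other multiplies by a positive constant), so the argmax can only change if the calibrated top probability drops below some other class's adjusted probability; if the calibrated top stays above 0.5, that cannot happen, since the remaining classes share less than 0.5 of the mass.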
Another question: I also saw that the calibrators require a `num_calibration` argument which doesn't seem to play any role. What's the reason for that?
Thanks and best regards, T