EFS-OpenSource / calibration-framework

The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a neural network.
https://efs-opensource.github.io/calibration-framework/
Apache License 2.0

TemperatureScaling and LogisticCalibration do not work correctly #61

Open salokin1997 opened 2 months ago

salokin1997 commented 2 months ago

Hello, thank you for this great toolbox. However, I have the same problem as described here (https://github.com/EFS-OpenSource/calibration-framework/issues/48#issue-1964505988). It appears to stem from the method used to fit the calibration mapping: with method='mle', the calibrated ECE (and the other metrics) is equal to the uncalibrated ECE. This affects both TemperatureScaling and LogisticCalibration. If I switch the method to mcmc, the problem no longer occurs. I am currently using version 1.3.6 of netcal. Below is an example based on your examples and the README:

import numpy as np
from netcal.metrics import ECE
from netcal.scaling import TemperatureScaling, LogisticCalibration
from sklearn.model_selection import train_test_split

# load data (avoid shadowing the built-in `input`)
data = np.load("records/cifar100/wideresnet-16-4-cifar-100.npz")
predictions = data['predictions']
ground_truth = data['ground_truth']

# split data set into build set and validation set
pred_train, pred_val, lbl_train, lbl_val = train_test_split(predictions, ground_truth,
                                                            test_size=0.7,
                                                            stratify=ground_truth,
                                                            random_state=None)

# apply TS
temperature = TemperatureScaling(detection=False, use_cuda=True, method='mle')
temperature.fit(pred_train, lbl_train)
calibrated_ts = temperature.transform(pred_val)

# apply LR
lr = LogisticCalibration(detection=False, use_cuda=True, method='mle')
lr.fit(pred_train, lbl_train)
calibrated_lr = lr.transform(pred_val)

# Evaluate
n_bins = 10
ece = ECE(n_bins)
uncalibrated_score = ece.measure(pred_val, lbl_val)
calibrated_score_ts = ece.measure(calibrated_ts, lbl_val)
calibrated_score_lr = ece.measure(calibrated_lr, lbl_val)

print(f'uncalibrated ECE: {uncalibrated_score}')
print(f'calibrated ECE with TS: {calibrated_score_ts}')
print(f'calibrated ECE with LR: {calibrated_score_lr}')

The output:

uncalibrated ECE: 0.05723183405505762
calibrated ECE with TS: 0.05723081579165799
calibrated ECE with LR: 0.05723081579165799
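As a sanity check independent of netcal's optimizer, temperature scaling can be re-fit in a few lines with NumPy/SciPy. This is a sketch, not netcal's implementation; `fit_temperature` and `apply_temperature` are hypothetical helpers written only for this comparison. It recovers logits (up to an additive constant, which cancels in the softmax) from the stored probabilities via their log, then fits T by minimizing the NLL. If this reference fit also returns T ≈ 1 on the same split, the data itself would be well calibrated; if it returns T noticeably different from 1 while the mle fit changes nothing, that points at the mle path.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(probs, labels):
    """Fit a single temperature T by minimizing NLL (hypothetical reference helper)."""
    # Softmax probabilities determine the logits up to a per-row constant,
    # which cancels in the softmax below.
    logits = np.log(np.clip(probs, 1e-12, 1.0))

    def nll(t):
        scaled = logits / t
        scaled -= scaled.max(axis=1, keepdims=True)  # numerical stability
        log_softmax = scaled - np.log(np.exp(scaled).sum(axis=1, keepdims=True))
        return -log_softmax[np.arange(len(labels)), labels].mean()

    return minimize_scalar(nll, bounds=(0.05, 10.0), method='bounded').x

def apply_temperature(probs, t):
    """Rescale softmax probabilities with a fitted temperature t."""
    scaled = np.log(np.clip(probs, 1e-12, 1.0)) / t
    scaled -= scaled.max(axis=1, keepdims=True)
    exp = np.exp(scaled)
    return exp / exp.sum(axis=1, keepdims=True)
```

Usage on the split above would be `t = fit_temperature(pred_train, lbl_train)` followed by `calibrated_ref = apply_temperature(pred_val, t)`; a working mle fit in netcal should give an ECE close to that of `calibrated_ref`.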

The output when using method='mcmc' instead of 'mle':

Sample: 100%|██████████| 350/350 [00:03, 96.51it/s, step size=7.82e-01, acc. prob=0.944] 
Sample: 100%|██████████| 350/350 [04:27,  1.31it/s, step size=3.79e-02, acc. prob=0.985]
uncalibrated ECE: 0.058336610062313915
calibrated ECE with TS: 0.039346123378723855
calibrated ECE with LR: 0.03517598363437825