gpleiss / temperature_scaling

A simple way to calibrate your neural network.
MIT License

ECE Increasing #29

Open austinmw opened 2 years ago

austinmw commented 2 years ago

Hi,

I ran this with a very simple 10 layer CNN model I trained on MNIST using pytorch lightning.

orig_model = pl_module.model
val_loader = trainer.datamodule.val_dataloader()
scaled_model = ModelWithTemperature(orig_model)
scaled_model.set_temperature(val_loader)

But the ECE ends up increasing instead of decreasing:

Before temperature - NLL: 0.645, ECE: 0.271
Optimal temperature: 1.229
After temperature - NLL: 0.779, ECE: 0.351

Any idea why this could be?

Liel-leman commented 2 years ago

Same for me:

Before temperature - NLL: 0.058, ECE: 0.002
Optimal temperature: 1.316
After temperature - NLL: 0.061, ECE: 0.010

dwil2444 commented 2 years ago

Check whether your model outputs a logits vector or softmax probabilities @NoSleepDeveloper @austinmw
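One quick way to check is to inspect the output tensor directly: softmax rows are non-negative and sum to 1, whereas raw logits generally are not. The helper below is a hypothetical sketch, not part of this repo.

```python
import torch

def looks_like_softmax(output: torch.Tensor, atol: float = 1e-4) -> bool:
    """Heuristic check: softmax rows are non-negative and sum to 1."""
    nonneg = bool((output >= 0).all().item())
    sums_to_one = torch.allclose(output.sum(dim=1),
                                 torch.ones(output.size(0)), atol=atol)
    return nonneg and sums_to_one

# Toy example: raw logits contain negative entries and do not sum to 1.
logits = torch.tensor([[2.0, -1.0, 0.5],
                       [0.1, 0.3, -0.2]])
probs = torch.softmax(logits, dim=1)

print(looks_like_softmax(logits))  # False -> safe to pass to set_temperature
print(looks_like_softmax(probs))   # True  -> model is already applying softmax
```

If the check reports softmax output, remove the final softmax (or pass the pre-softmax activations) before calling `set_temperature`, since temperature scaling divides logits, not probabilities.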

RobbenRibery commented 2 years ago

Same applies for me; the model outputs a logit vector, not softmax probabilities.

zhangyx0417 commented 1 year ago

I'm wondering if I could use ECE as the optimization objective rather than NLL, if the overhead is not large? (Given the problem above.)

RobbenRibery commented 1 year ago

I don't think ECE is differentiable, bro
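For context, here is a simplified sketch of how ECE is usually computed (it mirrors the binning idea of this repo's `_ECELoss`, but is not the exact implementation). The `argmax` and the hard bin masks are piecewise constant, so the gradient with respect to a temperature parameter is zero almost everywhere, which is why it cannot be optimized directly.

```python
import torch

def expected_calibration_error(logits, labels, n_bins=15):
    """Simplified ECE: bin-size-weighted |accuracy - confidence| gap.
    The argmax and binning masks are piecewise constant, hence
    non-differentiable w.r.t. a temperature parameter."""
    probs = torch.softmax(logits, dim=1)
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()

    ece = torch.zeros(1)
    bin_edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap
    return ece.item()

# Confident and correct -> near-zero ECE; confident and wrong -> near 1.
logits = torch.tensor([[10.0, 0.0], [0.0, 10.0]])
print(expected_calibration_error(logits, torch.tensor([0, 1])))
print(expected_calibration_error(logits, torch.tensor([1, 0])))
```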

RobbenRibery commented 1 year ago

But that being said, NLL is the metric we should minimise in order to make P(Y = ŷ | ŷ = f(x)) = f(x) [a perfectly calibrated model; you may think of the output probabilities as following a categorical distribution parameterised by f(x)].
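A tiny numeric illustration of why minimising NLL encourages calibration: NLL is a proper scoring rule, so the expected NLL is minimised exactly when the predicted probability equals the true one. The binary example below is a hedged sketch, not taken from this repo.

```python
import math

# True distribution of a binary label: P(y = 1) = 0.7.
p_true = 0.7

def expected_nll(q: float) -> float:
    """Expected negative log-likelihood when we predict P(y = 1) = q."""
    return -(p_true * math.log(q) + (1 - p_true) * math.log(1 - q))

# Scan candidate predictions; the minimum sits at q = p_true.
grid = [i / 100 for i in range(1, 100)]
best_q = min(grid, key=expected_nll)
print(best_q)  # 0.7
```

Since temperature scaling only has one scalar parameter, minimising NLL on held-out data moves the confidence distribution toward this calibrated optimum without changing the predicted classes.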

tomgwasira commented 1 year ago

Try increasing the learning rate or increasing max_iter. Your optimisation needs to converge. In the __init__ function of ModelWithTemperature, create an empty list to store the loss, i.e.

self.loss = []

then, before return loss in the eval function, append the loss to the list:

self.loss.append(loss.item())

After your call to set_temperature, plot the values in the self.loss list and see if the loss was minimised. The loss curve should taper off to some value that's somewhat constant after convergence.
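Pieced together, the suggestion above looks roughly like this in isolation. `logits` and `labels` below are synthetic stand-ins for the validation outputs that `set_temperature` caches, and the optimizer settings (lr=0.01, max_iter=50) mirror the repo's defaults rather than any particular configuration.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic stand-ins for the cached validation logits/labels.
labels = torch.randint(0, 10, (256,))
logits = torch.randn(256, 10)
logits[torch.arange(256), labels] += 2.0  # boost the correct class
logits = logits * 2.0                     # exaggerate confidence so T > 1 helps

temperature = nn.Parameter(torch.ones(1) * 1.5)
nll = nn.CrossEntropyLoss()
optimizer = torch.optim.LBFGS([temperature], lr=0.01, max_iter=50)

loss_history = []  # the list suggested above

def eval_closure():
    optimizer.zero_grad()
    loss = nll(logits / temperature, labels)
    loss.backward()
    loss_history.append(loss.item())  # record every closure evaluation
    return loss

optimizer.step(eval_closure)

# A converged run tapers off: later losses change very little.
print(loss_history[0], loss_history[-1])
```

Plotting `loss_history` (e.g. with matplotlib) shows whether the curve has flattened; if it is still falling at the last iteration, raise `max_iter` or the learning rate.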

MengyuanChen21 commented 11 months ago

After the optimization has converged, I still fail to get a decreasing ECE.

I wonder, is it even valid for us to get the optimal temperature by optimizing the NLL loss on the validation set? I find it a little strange.