aleximmer / Laplace

Laplace approximations for Deep Learning.
https://aleximmer.github.io/Laplace
MIT License

How to obtain the uncertainty scores per sample for a classification task? #131

Closed h-jia closed 11 months ago

h-jia commented 11 months ago

In the post-hoc Laplace on a large image classifier example, the model is optimized via Laplace:

```python
la = Laplace(model, 'classification', subset_of_weights='last_layer', hessian_structure='kron')
la.fit(train_loader)
la.optimize_prior_precision(method='marglik')
```

The model is then able to predict:

```python
probs_laplace = predict(test_loader, la, laplace=True)
```

But how can I generate per-sample uncertainty scores for the test data, e.g. something like `uncertainty, probs_laplace = predict(test_loader, la, laplace=True)`?

wiseodd commented 11 months ago

The uncertainty is already integrated into `probs_laplace`, because it's defined as

$$ \int \mathrm{softmax}(f_\theta(x)) \, \mathcal{N}(\theta \mid \theta_\text{map}, H(\theta_\text{map})^{-1}) \, d\theta . $$

So, to measure per-test-point uncertainty, simply apply an uncertainty metric to `probs_laplace`. A couple of examples:

```python
# Confidence: probability of the predicted class per sample.
# Note that tensor.max(-1) returns (values, indices), so take .values.
confidences = probs_laplace.max(-1).values

# Predictive entropy per sample: higher means more uncertain.
entropies = torch.distributions.Categorical(probs_laplace).entropy()
```

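For concreteness, here is a minimal self-contained sketch of both metrics, using a made-up probability tensor standing in for the output of `predict(test_loader, la, laplace=True)`:

```python
import torch

# Hypothetical predictive probabilities for 4 test samples over 3 classes,
# a stand-in for probs_laplace returned by the Laplace predictive.
probs_laplace = torch.tensor([
    [0.80, 0.10, 0.10],  # confident prediction
    [0.40, 0.30, 0.30],  # uncertain prediction
    [0.10, 0.85, 0.05],
    [0.34, 0.33, 0.33],  # near-uniform, most uncertain
])

# Per-sample confidence: probability assigned to the predicted class.
# tensor.max(-1) returns a (values, indices) pair; keep the values.
confidences = probs_laplace.max(-1).values

# Per-sample predictive entropy: higher entropy = more uncertainty.
entropies = torch.distributions.Categorical(probs=probs_laplace).entropy()

for i, (c, h) in enumerate(zip(confidences, entropies)):
    print(f"sample {i}: confidence={c:.3f}, entropy={h:.3f}")
```

Both tensors have one entry per test sample, so you can rank or threshold the test set by either metric directly.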
h-jia commented 11 months ago

Thanks!