leyuan-sun opened 2 years ago
In your ICML presentation slides, you mentioned that DUQ is able to estimate aleatoric uncertainty, but in Section 2.3 of the paper you said that DUQ captures both aleatoric and epistemic uncertainty.
This is a bit tricky. We show in the DUQ paper that the model is able to capture some aleatoric uncertainty (but not disentangle it from epistemic!). I think the DDU paper does a really nice job of discussing epistemic and aleatoric uncertainty in deterministic uncertainty estimation: https://arxiv.org/abs/2102.11582
Compared with "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?" by Alex Kendall in NeurIPS 2017, what is the advantage of DUQ?
The main problem with that method is that the uncertainty as characterized by the mean/std output and the uncertainty implied by the softmax output can differ.
Imagine one class has a large predicted mean value with a large std, while the other classes have small means with small stds. This implies a large amount of uncertainty; however, when sampling from this distribution, the resulting softmax output will be very certain (on the large-mean class).
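To make that concrete, here is a minimal numpy sketch (with made-up numbers) showing how a large-mean/large-std logit still produces a confident softmax:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up numbers: class 0 has a large predicted mean *and* a large predicted
# std; classes 1 and 2 have small means with small stds.
mean = np.array([8.0, 0.0, 0.0])
std = np.array([4.0, 0.1, 0.1])

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Sample logits from the predicted Gaussians, push each sample through the
# softmax, and average the resulting probabilities.
logits = rng.normal(mean, std, size=(10_000, 3))
mean_probs = softmax(logits).mean(axis=0)
print(mean_probs)  # class 0 gets ~0.9+ probability: a very "certain" softmax,
                   # despite the large std on its logit
```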
This is a bit odd and I am not sure if it's a desirable property of uncertainty estimation. Another good paper in that direction is https://arxiv.org/abs/2003.06778
Compared with using a Gaussian Mixture Density Network to estimate the stds (uncertainty), what is your advantage? Thanks ahead!
I am not entirely clear what you mean by this question, but these days I really like GMM likelihoods in feature space as uncertainty.
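For what it's worth, a rough sketch of that idea (in the spirit of the DDU paper linked above); `extract_features` and the data loaders are hypothetical stand-ins for your own pipeline:

```python
from sklearn.mixture import GaussianMixture

# extract_features() is a hypothetical helper that runs a trained network and
# returns penultimate-layer features of shape (N, D).
train_feats = extract_features(train_loader)
test_feats = extract_features(test_loader)

# Fit a GMM on in-distribution training features. DDU fits one Gaussian per
# class using the labels; plain EM with one component per class is used here
# for simplicity.
gmm = GaussianMixture(n_components=10, covariance_type="full")
gmm.fit(train_feats)

# Low log-likelihood under the GMM = far from the training feature
# distribution = high (epistemic) uncertainty.
uncertainty = -gmm.score_samples(test_feats)
```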
Thanks for your quick response. I would like to know: what is the difference between using a GMM and DUQ to estimate uncertainty?
Compared with "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?" by Alex Kendall in NeurIPS 2017, what is the advantage of DUQ?
The main problem with that method is that the uncertainty as characterized by the mean/std output and the uncertainty implied by the softmax output can differ.
What do you mean, the outputs can differ? Sorry, could you explain a bit more?
I am not entirely clear what you mean by this question, but these days I really like GMM likelihoods in feature space as uncertainty.
Hi, regarding GMMs: can you recommend a paper that uses GMMs for uncertainty estimation?