statsu1990 / monte_carlo_dropout

Uncertainty estimation in deep learning using Monte Carlo dropout with Keras
MIT License

Interpretation of the Predictive Uncertainty for decision making #2

Open akashmondal1810 opened 4 years ago

akashmondal1810 commented 4 years ago

Hello, I have modified the code for binary classification. Is there any way to interpret the obtained predictive uncertainty? After computing the predictive variance, i.e. the sample variance of the T stochastic forward passes, is there a way to calculate a threshold or cutoff value such that if the predictive variance is above that value we can say the model is uncertain, and below it the model is certain about its prediction?

Uncertain if (predictive variance >= threshold); certain if (predictive variance < threshold).

How can this threshold be computed? Thanks!
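For reference, a minimal sketch of the setup being asked about, assuming TensorFlow/Keras as in this repo. The model architecture, `T = 100`, and the `threshold` value are placeholders for illustration, not the repo's actual code:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the repo's model: any Keras binary
# classifier containing Dropout layers works the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

x = np.random.rand(32, 10).astype("float32")  # dummy inputs

T = 100  # number of stochastic forward passes

# training=True keeps the Dropout layers active at inference time,
# which is what turns ordinary dropout into Monte Carlo dropout.
preds = np.stack([model(x, training=True).numpy() for _ in range(T)], axis=0)

pred_mean = preds.mean(axis=0)  # predictive mean, shape (32, 1)
pred_var = preds.var(axis=0)    # sample variance over the T passes

threshold = 0.05  # placeholder; how to choose this is the open question
uncertain = pred_var >= threshold
```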

statsu1990 commented 4 years ago

Hello! Thank you for asking the question.

In my opinion, how to decide whether a prediction is uncertain or certain needs to be determined based on your purpose, so I do not have a general answer.

For example, how about using the following procedure to determine uncertainty or certainty?

1. Calculate the output multiple times with Monte Carlo dropout and collect the results in a histogram. This histogram is an empirical probability distribution of the model's output values.
2. Using this probability distribution, calculate the probability that the model's prediction is wrong.
3. If this probability is within your acceptable range, judge the prediction as certain; otherwise, judge it as uncertain.

For example, if the probability that the output falls within the mean ± 0.1 is greater than or equal to 80%, judge it as certain. A sketch of this criterion follows below.
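A rough sketch of that example criterion, using the numbers above (interval half-width 0.1, 80% cutoff). Here `mc_outputs` stands for the T stochastic outputs for a single input, and the synthetic samples are only for illustration:

```python
import numpy as np

def is_certain(mc_outputs, half_width=0.1, min_prob=0.8):
    """Judge certainty from Monte Carlo dropout samples.

    mc_outputs: array of shape (T,) holding the T stochastic outputs
    for one input; together they form an empirical predictive
    distribution. The prediction is judged 'certain' if at least
    min_prob of the samples fall within mean +/- half_width.
    """
    mean = mc_outputs.mean()
    within = np.abs(mc_outputs - mean) <= half_width
    return within.mean() >= min_prob

# Synthetic illustration: a tightly concentrated predictive
# distribution is judged certain, a widely spread one uncertain.
rng = np.random.default_rng(0)
tight = rng.normal(0.9, 0.02, size=100)
spread = rng.normal(0.5, 0.30, size=100)
print(is_certain(tight))   # True
print(is_certain(spread))  # False
```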