ykwon0407 / UQ_BNN

Uncertainty quantification using Bayesian neural networks in classification (MIDL 2018, CSDA)

a discussion about the inference method #6

Closed ShellingFord221 closed 5 years ago

ShellingFord221 commented 5 years ago

Hi, in your code you use MC dropout when inferring a new input's outputs. But recently I read another paper, Bayesian Convolutional Neural Networks with Variational Inference, where the author uses the local reparameterization trick in the convolutional layers to sample at inference time (see lines 132-146 in https://github.com/felix-laumann/Bayesian_CNN/blob/master/utils/BBBlayers.py), i.e. output = mean + std * eps, where eps is standard Gaussian noise and (I think) w is drawn from N(mean, std). I can't tell which method is better for sampling; if we could have a discussion about this, I'd be very, very thankful!! (By the way, he also covers aleatoric and epistemic uncertainty in his work :) )
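To make the contrast concrete, here is a minimal sketch of the two sampling schemes being compared: the local reparameterization trick (sampling the layer's pre-activation directly from its induced Gaussian) versus MC dropout (stochastic forward passes with dropout left on at test time). All parameter names and shapes below are made up for illustration; this is not code from either repository.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical variational parameters for a single linear layer.
W_mu = torch.randn(10, 5)          # variational mean of the weights
W_logvar = torch.randn(10, 5) - 5  # variational log-variance of the weights
x = torch.randn(3, 10)             # a batch of 3 inputs

# --- Local reparameterization trick ---
# Instead of sampling W ~ N(W_mu, W_sigma^2) and computing x @ W,
# sample the pre-activation directly: it is Gaussian with
# mean x @ W_mu and variance x^2 @ W_sigma^2.
act_mu = x @ W_mu
act_std = torch.sqrt((x ** 2) @ W_logvar.exp())
eps = torch.randn_like(act_mu)
out_lrt = act_mu + act_std * eps   # one sample of the layer output

# --- MC dropout ---
# Keep dropout active at test time and average T stochastic passes
# through an ordinary point-estimate weight matrix.
W = torch.randn(10, 5)
T = 20
samples = torch.stack(
    [F.dropout(x, p=0.5, training=True) @ W for _ in range(T)]
)
out_mc = samples.mean(0)           # predictive mean over the T passes
```

Both produce stochastic outputs whose spread reflects model uncertainty; the local reparameterization version tends to give lower-variance gradient estimates during training, while MC dropout needs no variational parameters at all.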

ykwon0407 commented 5 years ago

@ShellingFord221 I guess this topic is interesting, but it is a bit apart from my GitHub and my work. Please note that the proposed method is a way to quantify uncertainty, not a way to model the variational distribution. But I would like to discuss it! Could you send me an email at ykwon0407_at_snu.ac.kr?
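The distinction drawn here can be illustrated: whatever sampling scheme produces the T stochastic softmax outputs (MC dropout, local reparameterization, etc.), the uncertainty decomposition is computed afterwards from those samples. Below is a rough sketch of an aleatoric/epistemic split of the predictive covariance, written from my reading of the paper; the function name and toy data are my own.

```python
import numpy as np

def decompose_uncertainty(p):
    """Split predictive uncertainty for one input into two parts.

    p : (T, K) array of softmax outputs from T stochastic forward
        passes over K classes.
    Returns (aleatoric, epistemic), each a (K, K) matrix:
      aleatoric = mean over t of  diag(p_t) - p_t p_t^T
      epistemic = mean over t of  (p_t - p_bar)(p_t - p_bar)^T
    """
    p_bar = p.mean(axis=0)
    aleatoric = np.mean(
        [np.diag(p_t) - np.outer(p_t, p_t) for p_t in p], axis=0
    )
    epistemic = np.mean(
        [np.outer(p_t - p_bar, p_t - p_bar) for p_t in p], axis=0
    )
    return aleatoric, epistemic

# Toy example: 100 sampled predictions over 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
alea, epis = decompose_uncertainty(probs)
```

The point of the comment above is that this step is agnostic to how `probs` was sampled, which is why the choice of variational scheme is somewhat orthogonal to the uncertainty-quantification method itself.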

ShellingFord221 commented 5 years ago

Sure!