Open udion opened 6 years ago
Hey.
Sorry, this was early code from the development process. And I think you are right that this is not the loss described in the "What Uncertainties Do We Need..." paper.
However, training/combined_training.py contains a more understandable implementation of the loss from equation 8 in the paper (the mean is replaced with a sum there).
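For readers landing here, the loss from equation 8 can be written down directly once the network predicts s = log(sigma^2) instead of sigma (the standard trick for numerical stability). A minimal NumPy sketch of that equation, not the repo's exact code:

```python
import numpy as np

def heteroscedastic_loss(y_pred, log_var, y_true):
    """Regression loss from eq. 8 of Kendall & Gal (2017).
    The network predicts s = log(sigma^2); the loss is
    sum over elements of 0.5*exp(-s)*(y - f)^2 + 0.5*s
    (sum instead of mean, as in the repo)."""
    precision = np.exp(-log_var)          # 1 / sigma^2
    sq_err = (y_true - y_pred) ** 2
    return np.sum(0.5 * precision * sq_err + 0.5 * log_var)
```

With log_var fixed at 0 this reduces to half the sum of squared errors, i.e. the ordinary loss without uncertainty.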
Hope that helps, otherwise feel free to ask.
Hi,
So I was trying out the loss function as given in the paper (the one in training/combined_training.py that you mentioned) for one of my applications. I have noticed that my log_sigma values drop to 0 (i.e. sigma = 1), and my loss function reduces to the original loss function (the one without uncertainty). I was wondering what constraint stops the network from learning log_sigma = 0?
Any help appreciated.
Thanks
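One way to reason about this (my own note, not from the repo): nothing in equation 8 pins log_sigma to zero. For a fixed residual r, the per-sample term 0.5·exp(-s)·r² + 0.5·s is minimized at s* = log(r²), so log_sigma settling near 0 just means the residuals hover around 1 in the units of the target. A quick numerical check of that minimizer:

```python
import numpy as np

def per_sample_loss(s, r):
    """One term of eq. 8: 0.5 * exp(-s) * r^2 + 0.5 * s,
    with s = log(sigma^2) and r the residual."""
    return 0.5 * np.exp(-s) * r**2 + 0.5 * s

# Setting the derivative to zero gives s* = log(r^2);
# confirm by brute-force search on a fine grid.
r = 0.5
s_grid = np.linspace(-10.0, 10.0, 200001)
s_star = s_grid[np.argmin(per_sample_loss(s_grid, r))]
# s_star is approximately log(0.25) = -1.386
```

So if the targets are normalized so that typical errors are around 1, a collapse to log_sigma = 0 is exactly what the loss asks for.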
@hutec
In my application, I try to train the network with the above-mentioned loss function, but my log_sigma values keep dropping to zero. Any clues?
Sorry, I'm currently busy, and off the top of my head I have no clues. If I find time, I may look into it.
@hutec
I think the loss function where you use Monte Carlo integration is for the classification task (and not regression); it's mentioned in the paper. Although the maths is still not entirely clear to me.
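For context, the Monte Carlo classification loss referred to here is equation 12 of the paper: corrupt the logits with Gaussian noise scaled by the predicted sigma, average the resulting softmax probabilities over T samples, and take the negative log of the true class's averaged probability. A NumPy sketch of that idea (my own, assuming one sigma per logit; `T = 20` matches the repo's setting):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_classification_loss(logits, sigma, label, T=20, rng=None):
    """Eq. 12 of Kendall & Gal (2017), sketched:
    x_hat_t = logits + sigma * eps_t,  eps_t ~ N(0, I),
    loss = -log( (1/T) * sum_t softmax(x_hat_t)[label] )."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((T,) + logits.shape)   # the `eps` draws
    noisy = logits[None, :] + sigma[None, :] * eps   # T corrupted logit vectors
    p = softmax(noisy).mean(axis=0)                  # Monte Carlo average
    return -np.log(p[label])
```

With sigma = 0 this collapses to ordinary cross-entropy, which is a handy sanity check.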
@hutec
What are your thoughts on this? Do you think the results can be improved? Any suggestions?
https://udion.github.io/post/uncertain_deepl/
Thanks
Hey, great work @hutec.
I have a doubt though: in the paper "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", the loss function doesn't involve the `eps` noise samples or `T = 20` (the number of Monte Carlo integration steps for adding noise to the uncertainty). Your loss function is not clear to me; could you please elaborate a little, or point to some resources that build up this concept thoroughly and mathematically? Thanks