Closed anishmuthali closed 3 years ago
Hi,
sorry for the late response and the inconvenience.
To me this looks likely to be an issue occurring during visualization, since in the upper half of the image you can already see (very vaguely) that our uncertainty approximation has a similar form to the sampling-based uncertainty. The scaling is a bit off, but that is probably due to the artifacts in the lower half of the image.
I did not retrain with bn_train=True, but I tried the method with a randomly initialized model, and the resulting patterns of MC dropout and our sampling-free approach look similar. The same goes for our pretrained model. Unfortunately, I have not been able to reproduce this issue thus far.
I updated the repository with my environment specification (sampling_free_env.yml). If you use conda, it would be great if you could try again with this env to make sure it's not an environment issue.
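For reference, assuming a standard conda installation, recreating the environment from the provided spec would look something like this (the exact env name is defined inside the .yml, so the activate line is a placeholder):

```shell
# Recreate the environment from the spec shipped in the repo
conda env create -f sampling_free_env.yml

# Activate it; the env name comes from the "name:" field in the .yml
conda activate <env_name>
```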
Keep me posted!
Best, Janis
Hi,
Thank you very much for your help. I believe it was related to the environment setup, so I used your conda env and it works now. Closing this issue.
I'm seeing these results when running the uncertainty propagator in the Bayesian SegNet:
The Monte Carlo uncertainty looks fine but for some reason the uncertainty propagation method doesn't seem to be working. I am running the code as default except I have set
bn_train = True
when initializing the model, since this has been giving better accuracy. The image above is from an example on the test set. If I view an example from the train set, I see that
unc_our
is just a black image. This is after training, but I'm getting similar results when using the pretrained model weights from the Google Drive folder.