Closed: gkwt closed this issue 1 year ago
Hi @gkwt,
You already seem to be getting the kl value from the model; can you try commenting out the get_kl_loss call, as below?
output, kl = model(datapoints)
#kl = get_kl_loss(model)
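For illustration, the returned kl can be folded directly into the training loss. A minimal sketch continuing the snippet above (criterion, targets, and batch_size are hypothetical names from the surrounding context, and dividing the KL by the batch size is just one common scaling choice):

optimizer.zero_grad()
output, kl = model(datapoints)
# ELBO-style objective: data-fit term plus the KL term returned by the model
loss = criterion(output, targets) + kl / batch_size
loss.backward()
optimizer.step()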
The problem persists even without the get_kl_loss function. I should note that the KL values are the same as before, and backpropagation still does not change them.
I have also tried this with LinearFlipout; the KL still seems unaffected by the optimizer. After initializing the model, I also added
for param in model.parameters():
param.requires_grad = True
to unfreeze the layers, but it had no effect on training; the KL remains constant.
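One way to check whether the variational parameters are actually receiving gradients is to inspect them right after loss.backward(). A small hypothetical helper (not from this thread), assuming a standard torch.nn.Module:

import torch

def report_grads(model: torch.nn.Module) -> None:
    # Print each parameter's requires_grad flag and gradient norm;
    # a grad of None means no gradient reached it during backward().
    for name, param in model.named_parameters():
        grad = None if param.grad is None else param.grad.norm().item()
        print(f"{name}: requires_grad={param.requires_grad}, grad_norm={grad}")

If the mu/rho parameters of the Bayesian layer show no gradient, the KL term is not part of the graph that loss.backward() traverses.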
Hi @ranganathkrishnan,
There was a bug in my training loop. I was overwriting the model, so the KL was not changing. Sorry for the confusion!
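For anyone hitting the same symptom, a minimal sketch of this kind of bug (hypothetical names and data): re-creating the model inside the loop resets the variational parameters every epoch, so the printed KL never moves from its initialization value.

import torch
from bayesian_torch.layers import LinearReparameterization

def make_model():
    return LinearReparameterization(10, 1)

x, y = torch.randn(32, 10), torch.randn(32, 1)

for epoch in range(5):
    model = make_model()  # BUG: overwrites the model each epoch, discarding all updates
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    optimizer.zero_grad()
    output, kl = model(x)
    loss = torch.nn.functional.mse_loss(output, y) + kl / x.size(0)
    loss.backward()
    optimizer.step()
    print(kl.item())  # stays at the initialization value every epoch

Constructing the model and the optimizer once, above the loop, restores normal training.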
Hello,
I am trying to build a single-layer BNN using the LinearReparameterization layer. I am unable to get it to give reasonable uncertainty estimates, so I started monitoring the KL term from the layers and noticed that it does not change at all from epoch to epoch. Even when I scale up the KL term in the loss, it remains unchanged. I am not sure if this is a bug or if I am not doing the training correctly.
My model and my training loop are sketched below.
When I print the KL loss, it starts at ~5.0 and does not decrease at all.
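For completeness, here is a minimal, self-contained sketch of the kind of setup described above (hypothetical dimensions, data, and hyperparameters; not the exact code from this issue). With the model built once outside the loop, the printed KL should change from step to step once gradients flow:

import torch
import torch.nn as nn
from bayesian_torch.layers import LinearReparameterization

class SingleLayerBNN(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.blinear = LinearReparameterization(in_features, out_features)

    def forward(self, x):
        # The Bayesian layer returns both the output and its KL contribution
        out, kl = self.blinear(x)
        return out, kl

model = SingleLayerBNN(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(128, 10), torch.randn(128, 1)

kl_weight = 1.0  # scale factor on the KL term
for epoch in range(200):
    optimizer.zero_grad()
    output, kl = model(x)
    loss = nn.functional.mse_loss(output, y) + kl_weight * kl / x.size(0)
    loss.backward()
    optimizer.step()
    if epoch % 50 == 0:
        print(f"epoch {epoch}: kl = {kl.item():.3f}")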