Closed — iperov closed this issue 1 year ago
I guess I do not understand. This implementation does add a loss for the KL divergence using the mean and log_var / std dev; it is just added inside the layer via Layer.add_loss().
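For reference, the KL term that is typically added via add_loss is the closed-form KL divergence between the approximate posterior N(mean, exp(logvar)) and a standard normal prior. A minimal NumPy sketch of that formula (the function name and shapes here are illustrative, not taken from the repo):

```python
import numpy as np

def kl_divergence(mean, logvar):
    """Closed-form KL( N(mean, exp(logvar)) || N(0, 1) ), summed over dims.

    KL = -0.5 * sum(1 + logvar - mean^2 - exp(logvar))
    """
    return -0.5 * np.sum(1.0 + logvar - np.square(mean) - np.exp(logvar))

# The KL term is zero when the posterior already matches the prior N(0, 1):
print(kl_divergence(np.zeros(4), np.zeros(4)))  # → 0.0
```

In Keras this value would be passed to self.add_loss(...) inside the layer's call(), which is why no explicit KL term shows up in the model's compile step.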
Also, what is meant by "resampler formula" — the reparameterization trick? I did find this difference:
return mean + K.exp(stddev) * epsilon
whereas other implementations normally have:
return mean + K.exp(stddev/2) * epsilon
Anyway, thank you for contributing 😃 !!
In the bottom example, the stddev variable actually represents the log variance. Dividing it by two and putting it through the exp function simply converts log variance to standard deviation. If you want to use the standard deviation directly instead of the variance or log variance, it would just be return mean + stddev * epsilon
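To make that equivalence concrete, here is a small NumPy sketch showing that mean + exp(logvar/2) * epsilon and mean + stddev * epsilon produce identical samples when logvar is the log variance of the same Gaussian (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

mean, stddev = 1.5, 2.0
logvar = np.log(stddev ** 2)   # log variance of the same Gaussian

eps = rng.standard_normal(100_000)

# The reparameterization trick written two equivalent ways:
z_from_logvar = mean + np.exp(logvar / 2) * eps  # exp(logvar/2) == stddev
z_from_stddev = mean + stddev * eps

print(np.allclose(z_from_logvar, z_from_stddev))  # → True
print(z_from_logvar.std())                        # empirical std ≈ 2.0
```

Without the division by two, exp(logvar) would be the variance rather than the standard deviation, so the samples would be over-dispersed — which is exactly the discrepancy quoted above.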
Thanks @Gregor0410, I corrected the output of the previous layer to be called logvar.
I checked various PyTorch repos; all of them have a loss for the mean and log_var values, but yours does not. Also, the resampling formula is wrong in your repo.