kasparmartens / NeuralProcesses

Neural Processes implementation for 1D regression

KL-divergence sign error #1

Closed · wohlert closed this 4 years ago

wohlert commented 6 years ago

I believe there is a mistake in the way that you calculate the KL-divergence. The log term should be subtracted instead of added.

KLqp_gaussian <- function(mu_q, sigma_q, mu_p, sigma_p){
  sigma2_q <- tf$square(sigma_q) + 1e-16
  sigma2_p <- tf$square(sigma_p) + 1e-16
  temp <- sigma2_q / sigma2_p + tf$square(mu_q - mu_p) / sigma2_p - 1.0 - tf$log(sigma2_p / sigma2_q + 1e-16)
  0.5 * tf$reduce_sum(temp)
}
kasparmartens commented 6 years ago

Hi! I am not sure this is the case. For example, following https://stats.stackexchange.com/a/7449, the log term should be log(sigma2_p / sigma2_q), which can equivalently be written as -1 * log(sigma2_q / sigma2_p).
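
For reference, the standard closed form for the KL divergence between univariate Gaussians q = N(mu_q, sigma_q^2) and p = N(mu_p, sigma_p^2), as derived in the linked answer, is

$$
\mathrm{KL}(q \,\|\, p) = \log\frac{\sigma_p}{\sigma_q} + \frac{\sigma_q^2 + (\mu_q - \mu_p)^2}{2\sigma_p^2} - \frac{1}{2} = \frac{1}{2}\left(\frac{\sigma_q^2}{\sigma_p^2} + \frac{(\mu_q - \mu_p)^2}{\sigma_p^2} - 1 + \log\frac{\sigma_p^2}{\sigma_q^2}\right),
$$

so the log(sigma2_p / sigma2_q) term enters with a plus sign.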

wohlert commented 6 years ago

It is true that the identity holds. However, when I use TensorFlow's built-in KL divergence I get a different value: https://github.com/tensorflow/tensorflow/blob/cfebbbc94f3edd1622a9a42379dd2ccc956ea52c/tensorflow/python/ops/distributions/normal.py#L277

kasparmartens commented 6 years ago

When I look at that TensorFlow function, it uses the notation "ratio = sigma2_q / sigma2_p", and as a result it contains -1 * log(ratio), which I believe matches my code.
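
Writing out that sign explicitly: with ratio = sigma2_q / sigma2_p, the TensorFlow source subtracts log(ratio), and

$$
-\log\frac{\sigma_q^2}{\sigma_p^2} = \log\frac{\sigma_p^2}{\sigma_q^2},
$$

which is the plus-signed log term in the closed form above.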

I also did a quick numerical test:

library(tensorflow)
mu_p <- 1.5
sigma_p <- 0.4
mu_q <- 2.0
sigma_q <- 0.7
p <- tf$distributions$Normal(mu_p, sigma_p)
q <- tf$distributions$Normal(mu_q, sigma_q)

sess <- tf$Session()
sess$run(q$kl_divergence(p))  # TensorFlow's built-in KL(q || p)
# 1.252884
sess$run(KLqp_gaussian(mu_q, sigma_q, mu_p, sigma_p))  # the repo's KLqp_gaussian
# 1.252884
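
As an extra sanity check, the closed form above can be evaluated directly in base R on the same numbers (no TensorFlow needed); the variant with the log term subtracted is shown alongside it for comparison:

# Closed-form KL(q || p) for univariate Gaussians, in base R
mu_p <- 1.5; sigma_p <- 0.4
mu_q <- 2.0; sigma_q <- 0.7
log(sigma_p / sigma_q) + (sigma_q^2 + (mu_q - mu_p)^2) / (2 * sigma_p^2) - 0.5
# 1.252884

# Same numbers with the log term subtracted instead (the proposed change):
0.5 * (sigma_q^2 / sigma_p^2 + (mu_q - mu_p)^2 / sigma_p^2 - 1 - log(sigma_p^2 / sigma_q^2))
# 2.372116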
neighthan commented 6 years ago

I can also confirm that Kaspar's implementation matches PyTorch's (again, after a slight rearrangement of terms), and I have verified this numerically as well.