vineetjohn / linguistic-style-transfer

Neural network parametrized objective to disentangle and transfer style and content in text
Apache License 2.0
138 stars 33 forks

get_kl_loss and sample_prior function #66

Closed sakuranew closed 5 years ago

sakuranew commented 5 years ago

KL loss may be computed by:

```python
def kl_divergence(p, q):
    return tf.reduce_sum(p * tf.log(p / q))
```

So I don't understand what `get_kl_loss` and `sample_prior` mean.

```python
def get_kl_loss(self, mu, log_sigma):
    return tf.reduce_mean(
        input_tensor=-0.5 * tf.reduce_sum(
            input_tensor=1 + log_sigma - tf.square(mu) - tf.exp(log_sigma),
            axis=1))
```

vineetjohn commented 5 years ago

The KL loss equation in `get_kl_loss` is sourced from Appendix B of the original Auto-Encoding Variational Bayes paper: https://arxiv.org/pdf/1312.6114.pdf

`sample_prior` is just a helper function to sample points from a parameterized Gaussian distribution, the parameters being an estimated mean and an estimated (log-)variance.
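A minimal NumPy sketch of the two ideas, assuming `log_sigma` holds the log-variance (which is what the `tf.exp(log_sigma)` term in the repo's code implies). The repo's actual implementation is in TensorFlow; the function names here just mirror the thread, and `sample_prior` is written as the standard reparameterization trick (`z = mu + sigma * eps`), which is an assumption about its behavior:

```python
import numpy as np

def get_kl_loss(mu, log_sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) from Appendix B of
    # Kingma & Welling (2013): summed over latent dims, averaged over batch.
    return np.mean(
        -0.5 * np.sum(1 + log_sigma - np.square(mu) - np.exp(log_sigma), axis=1))

def sample_prior(mu, log_sigma):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    # log_sigma is the log-variance, so sigma = exp(log_sigma / 2).
    eps = np.random.standard_normal(mu.shape)
    return mu + np.exp(log_sigma / 2) * eps

# When the approximate posterior already equals the standard-normal prior
# (mu = 0, log_sigma = 0), the KL penalty is zero.
mu = np.zeros((4, 8))
log_sigma = np.zeros((4, 8))
print(abs(get_kl_loss(mu, log_sigma)))  # prints 0.0
print(sample_prior(mu, log_sigma).shape)  # prints (4, 8)
```

Note the contrast with the discrete `kl_divergence(p, q)` above: because both distributions here are Gaussian, the KL integral has a closed form in `mu` and `log_sigma`, so no explicit densities `p` and `q` ever need to be evaluated.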

sakuranew commented 5 years ago

thank you