pat-coady / trpo

Trust Region Policy Optimization with TensorFlow and OpenAI Gym
https://learningai.io/projects/2017/07/28/ai-gym-workout.html
MIT License

Mistake in KL divergence formula #25

Closed ghost closed 5 years ago

ghost commented 5 years ago

Hi,

There is a small mistake in the policy.py file where you calculate the KL divergence between two multivariate normal distributions:

```python
self.kl = 0.5 * tf.reduce_mean(
    log_det_cov_new - log_det_cov_old + tr_old_new +
    tf.reduce_sum(tf.square(self.means - self.old_means_ph) /
                  tf.exp(self.log_vars), axis=1) - self.act_dim)
```

The ratio of the covariances, i.e. tr_old_new, should be squared in the KL divergence; tr_old_new just needs to be replaced with tr_old_new**2.
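For reference, one way to sanity-check either version of the formula: with diagonal covariances, the multivariate KL must equal the sum of per-dimension univariate KLs. A minimal NumPy sketch of that check (function names are mine, not from policy.py):

```python
import numpy as np

def kl_diag(mu_old, logvar_old, mu_new, logvar_new):
    # Closed-form KL(old || new) for diagonal Gaussians, mirroring the
    # structure of the expression in policy.py for a single sample.
    tr = np.sum(np.exp(logvar_old - logvar_new))        # trace term
    quad = np.sum((mu_new - mu_old) ** 2 / np.exp(logvar_new))
    return 0.5 * (np.sum(logvar_new) - np.sum(logvar_old)
                  + tr + quad - mu_old.size)

def kl_1d(mu1, var1, mu2, var2):
    # Standard univariate KL( N(mu1, var1) || N(mu2, var2) ).
    return 0.5 * (np.log(var2 / var1)
                  + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

rng = np.random.default_rng(0)
mu_old, mu_new = rng.normal(size=3), rng.normal(size=3)
lv_old, lv_new = rng.normal(size=3), rng.normal(size=3)

multi = kl_diag(mu_old, lv_old, mu_new, lv_new)
per_dim = sum(kl_1d(mu_old[i], np.exp(lv_old[i]),
                    mu_new[i], np.exp(lv_new[i])) for i in range(3))
print(np.isclose(multi, per_dim))
```

Whichever form of the trace term is used in kl_diag can be tested this way against the well-known univariate formula.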