Closed: BridgetteSong closed this issue 3 years ago
In the posterior.py code:

time_level_log_probs = -0.5 * (tf.cast(dim, tf.float32) * tf.math.log(2 * np.pi) + tf.reduce_sum(expanded_logvar + normalized_samples ** 2., axis=3))

But the log-prob of a Gaussian is:

log_probs = log(1.0 / (sqrt(2.0 * pi) * std) * exp(-0.5 * (x - u) ** 2 / std ** 2))
          = -0.5 * (log(2.0 * pi) + 2.0 * log(std) + (x - u) ** 2 / std ** 2)

So isn't a constant factor of 2.0 missing on expanded_logvar (even though it doesn't affect optimization)? That is, shouldn't it be:

time_level_log_probs = -0.5 * (tf.cast(dim, tf.float32) * tf.math.log(2 * np.pi) + tf.reduce_sum(2.0 * expanded_logvar + normalized_samples ** 2., axis=3))
Thanks for your comments and the rigorous derivation! The logvar here is actually log(std ** 2.0) in your equation, so the factor of 2.0 is already absorbed: logvar = log(std ** 2.0) = 2.0 * log(std). @BridgetteSong
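To make the resolution concrete, here is a small numeric sketch in plain NumPy (standing in for the repository's TensorFlow code, with made-up values for x, mu, and std). It checks that the code's per-dimension form, which adds logvar once, equals the textbook Gaussian log-density written with 2.0 * log(std), precisely because logvar is defined as log(std ** 2):

import numpy as np

# Hypothetical scalar values, chosen only for illustration.
x, mu, std = 0.7, 0.2, 1.5

# As parameterized in the model: logvar = log(std**2) = 2*log(std).
logvar = np.log(std ** 2)
normalized = (x - mu) / std

# Per-dimension form matching the posterior.py expression:
# note logvar appears WITHOUT a factor of 2.0.
code_form = -0.5 * (np.log(2 * np.pi) + logvar + normalized ** 2)

# Textbook Gaussian log-density written with log(std):
# here the factor 2.0 appears explicitly.
textbook = -0.5 * (np.log(2 * np.pi) + 2.0 * np.log(std)
                   + (x - mu) ** 2 / std ** 2)

assert np.isclose(code_form, textbook)

Both expressions agree, so no factor of 2.0 is missing; it is simply folded into the logvar parameterization.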