leao1995 opened this issue 5 years ago (status: Open)
pz.logp(z) computes p(z) = N(z; mean, scale), not p(z) = N(z; 0, I), so no further transformation is applied.
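For context, this claim can be checked numerically: a Gaussian log-density with a mean and scale already contains the -log(scale) term that the change of variables to a standard Gaussian would contribute. A minimal sketch in pure Python (all variable names and values here are illustrative, not from the Glow code):

```python
import math

def gaussian_logpdf(x, mean=0.0, scale=1.0):
    """log N(x; mean, scale^2) for a scalar x."""
    return (-0.5 * math.log(2 * math.pi) - math.log(scale)
            - 0.5 * ((x - mean) / scale) ** 2)

# Illustrative values standing in for a learned prior's parameters.
z, mean, scale = 1.3, 0.4, 2.0

# Direct evaluation under N(mean, scale), as pz.logp(z) does.
direct = gaussian_logpdf(z, mean, scale)

# Change-of-variables view: normalize z, evaluate under N(0, 1),
# and add the affine map's log|det| = -log(scale).
eps = (z - mean) / scale
change_of_vars = gaussian_logpdf(eps) - math.log(scale)

assert abs(direct - change_of_vars) < 1e-12
```

The two expressions agree term by term, which is the sense in which "no more transformation" is needed.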
I have a similar question to @leao1995's. If the same prior distribution were used for every input, @naturomics's argument would hold. However, in this code logp(z2) depends on z1, so I think an additional determinant is required for this transformation (shifting the mean and rescaling the variance can themselves be regarded as a transformation). https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L549-L552 https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L577-L580
My understanding might be wrong. More fundamentally, I wonder why a trainable distribution is used instead of N(z; 0, I).
https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L552
This is equivalent to evaluating (z - mean)/scale under a standard Gaussian, but the determinant of that transformation does not appear to be accounted for.
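One way to frame the disagreement: a Gaussian prior whose mean and scale depend on z1 is equivalent to a standard-Gaussian prior preceded by one extra affine layer, and the learned Gaussian's log-density already includes that layer's log-determinant. A sketch under that reading, with a toy conditioning network loosely mimicking the split prior (the functions and values are mine, not the repo's):

```python
import math

def gaussian_logpdf(x, mean=0.0, scale=1.0):
    """log N(x; mean, scale^2) for a scalar x."""
    return (-0.5 * math.log(2 * math.pi) - math.log(scale)
            - 0.5 * ((x - mean) / scale) ** 2)

def prior_params(z1):
    # Hypothetical stand-in for the network that maps z1 to the
    # mean/scale of z2's prior; any smooth functions would do here.
    return 0.5 * z1, math.exp(0.1 * z1)

z1, z2 = 0.7, -1.2
mean, scale = prior_params(z1)

# Learned-prior view: evaluate z2 directly under N(mean, scale).
learned = gaussian_logpdf(z2, mean, scale)

# Flow view: map z2 to eps = (z2 - mean)/scale, evaluate eps under
# N(0, 1), and add the affine map's log|det| = -log(scale).
eps = (z2 - mean) / scale
flow = gaussian_logpdf(eps) - math.log(scale)

assert abs(learned - flow) < 1e-12
```

On this reading the determinant is not missing; it is absorbed into the -log(scale) term of the learned Gaussian's log-density, which is why evaluating pz.logp(z) directly gives the same number as the explicit change-of-variables computation.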