openai / glow

Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions"
https://arxiv.org/abs/1807.03039
MIT License

Is this a bug? #67


leao1995 commented 5 years ago

https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L552

This is equivalent to evaluating (z - mean)/scale under a standard Gaussian, but the log-determinant of that transformation is not accounted for.

naturomics commented 5 years ago

pz.logp(z) calculates p(z) ~ N(z; mean, scale), not p(z) ~ N(z; 0, I), so there is no further transformation to account for.
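
A minimal numeric sketch of the identity this exchange turns on, written with NumPy/SciPy rather than the repository's TensorFlow code: evaluating z directly under N(mean, scale) already includes the -log(scale) term that a change of variables to the standard Gaussian would introduce.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.normal(size=5)
mean, scale = 0.3, 1.7  # illustrative values, not taken from model.py

# Direct evaluation under the parameterized Gaussian, as pz.logp(z) does.
logp_direct = norm.logpdf(z, loc=mean, scale=scale)

# Standardize first, then add the log-determinant of z -> (z - mean)/scale,
# which is -log(scale) per dimension.
logp_standardized = norm.logpdf((z - mean) / scale) - np.log(scale)

assert np.allclose(logp_direct, logp_standardized)
```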

ryokamoi commented 5 years ago

I have a similar question to @leao1995's. If the identical prior distribution were used for every input, the argument by @naturomics would hold. However, in this code logp(z2) depends on z1, so I think an additional determinant term is required for this transformation (changing the mean and variance can itself be regarded as a transformation).

https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L549-L552
https://github.com/openai/glow/blob/eaff2177693a5d84a1cf8ae19e8e0441715b82f8/model.py#L577-L580

My understanding might be wrong. In the first place, I wonder why a trainable distribution is used instead of N(z; 0, I).
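
A hypothetical sketch of the pattern being questioned, with a stand-in linear map f in place of the repository's learned layer: the prior over z2 is a diagonal Gaussian whose parameters are functions of z1, and z2 itself is evaluated under that conditional Gaussian without any extra determinant term. Whether such a term is needed is exactly what this thread is asking.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z1 = rng.normal(size=4)
z2 = rng.normal(size=4)

def f(z1):
    # Stand-in for the learned network mapping z1 to (mean, log_sd);
    # the actual repo uses a convolution here.
    mean = 0.5 * z1
    log_sd = 0.1 * z1
    return mean, log_sd

mean, log_sd = f(z1)

# log p(z2 | z1): z2 is evaluated under the conditional Gaussian directly.
# The question above is whether this needs a correction because mean and
# log_sd depend on z1.
logp_z2_given_z1 = norm.logpdf(z2, loc=mean, scale=np.exp(log_sd)).sum()
print(logp_z2_given_z1)
```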