lxuechen opened this issue 6 years ago
The code in model.py essentially performs:

```
n_bins = 2 ** n_bits_x
x = floor(x / 2 ** (8 - n_bits_x))
x = x / n_bins - 0.5
```
Which can be more easily understood as:
```
n_bins = 2 ** n_bits_x
x = x / 2 ** 8                   # Scale to [0, 1).
x = floor(x * n_bins) / n_bins   # Keep the top n_bits_x bits (zero the 8 - n_bits_x least significant bits).
x = x - 0.5                      # Center.
```
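As a quick numerical check (a standalone sketch, not code from the repo), the two forms agree on 8-bit integer inputs:

```python
# Standalone check (not from model.py): the two preprocessing forms above
# produce identical results for 8-bit integer inputs.
import numpy as np

n_bits_x = 5
n_bins = 2 ** n_bits_x
x = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype(np.float64)

# Form used in model.py: divide, floor, rescale, center.
a = np.floor(x / 2 ** (8 - n_bits_x)) / n_bins - 0.5

# Equivalent form: scale to [0, 1), quantize to n_bits_x bits, center.
b = np.floor((x / 2 ** 8) * n_bins) / n_bins - 0.5

assert np.allclose(a, b)
```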
Since the likelihood is modeled over the transformed inputs, no extra term is needed to account for these transformations.
Hi,
I understand that this might be a minor issue and that you don't report bits/dim for celebA-HQ (256x256x3), but I think it's worth noting that the log-determinant might be off by a constant because of the extra division in the preprocessing step (https://github.com/openai/glow/blob/654ddd0ddd976526824455074aa1eaaa92d095d8/model.py#L156).
If I'm not mistaken, this term is not added to the objective later on. Could you verify this? It might be of interest to future work that intends to report numbers for celebA-HQ (256x256x3).
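For concreteness, here is the constant I have in mind (a rough sketch of my own, not code from the repo): an elementwise division y = x / s contributes -D * log(s) to the log-density under the change of variables, where D is the input dimensionality.

```python
# Rough sketch (not from the repo): change-of-variables constant for an
# elementwise division y = x / s, i.e. log p_X(x) = log p_Y(x / s) - D * log(s).
import numpy as np

image_shape = (256, 256, 3)        # celebA-HQ resolution
D = np.prod(image_shape)

n_bits_x = 5                       # example setting; any bit depth works
s = 2.0 ** (8 - n_bits_x)          # example scale; substitute whichever division is not accounted for in the objective
log_det_constant = -D * np.log(s)  # constant offset in nats

# As an offset to bits/dim this is simply log2(s); here 8 - n_bits_x = 3 bits.
print(-log_det_constant / (D * np.log(2)))
```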
Thanks, Chen