Open talesa opened 4 years ago
Yes, it seems that this treatment of n_bits is problematic compared to the official implementation. (I don't know why I missed it.)
I used `n_bits = 5` because the official implementation used it for celeba-hq.
It seems to me that this implementation only uses the 8-bit version of the dataset (the default, if I'm not mistaken), as it doesn't seem to decrease the number of bits of the input data like https://github.com/openai/glow/blob/654ddd0ddd976526824455074aa1eaaa92d095d8/model.py#L153-L158 does. Correct me if I'm wrong somewhere; I don't know the openai/glow repo well.
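For context, the linked openai/glow lines drop the low-order bits of the pixels before dequantization. Here is a rough numpy sketch of that preprocessing (my own paraphrase, not the repo's exact code; function and variable names are mine):

```python
import numpy as np

def preprocess(x, n_bits=5):
    # Reduce 8-bit pixels to n_bits levels by dropping low-order bits,
    # then scale to roughly [-0.5, 0.5), as in the linked glow snippet.
    n_bins = 2.0 ** n_bits
    x = x.astype(np.float64)
    if n_bits < 8:
        x = np.floor(x / 2 ** (8 - n_bits))
    return x / n_bins - 0.5

x = np.arange(256)           # all possible 8-bit pixel values
y = preprocess(x, n_bits=5)  # 256 inputs collapse to 2**5 = 32 distinct levels
```

With `n_bits = 8` the floor step is skipped and all 256 levels survive, which is why a repo that never calls something like this effectively trains on the full 8-bit data.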
I forgot to add it. Anyway, 97081ff will resolve the issue.
Thanks a lot! I'm sorry I didn't create a pull request straight away myself; I just wanted to check it with you first!
I think there might be a tiny mistake in the dequantization process at the moment. I think that https://github.com/rosinality/glow-pytorch/blob/master/train.py#L99 should be

`n_bins = 2. ** args.n_bits - 1.`

rather than `n_bins = 2. ** args.n_bits`, since, as far as I understand, in the following code snippet the minimum difference between the input levels/bin values, `(a[1:] - a[:-1]).min()`, should be the same as `1 / n_bins` (run after `image, _ = next(dataset)` on line 109 in train.py, https://github.com/rosinality/glow-pytorch/blob/master/train.py#L109).

Also, it's a bit confusing that by default `n_bits` is set to 5, whereas by default `n_bits` for CelebA is 8; I'd change it to 8.