Open swyoon opened 4 years ago
For a Glow model, x and z should have the same shape. However, part of z is discarded during the Split2d operation. That may be the reason why x and x_hat are not similar at all.
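A toy sketch of the point above (not the repo's actual code): a split step keeps only part of the latent, so the reverse pass has to re-sample the discarded part, and that portion of the reconstruction will generally not match the input. The function names here are made up for illustration.

```python
import random

def split_forward(x):
    # keep the first half as the retained latent; the rest is discarded,
    # mimicking what a Split2d layer does to part of z
    half = len(x) // 2
    return x[:half], x[half:]

def split_reverse(z_kept, eps_std=1.0):
    # on the way back, the discarded part is re-sampled from a Gaussian
    resampled = [random.gauss(0.0, eps_std) for _ in z_kept]
    return z_kept + resampled

x = [0.5, -1.2, 3.0, 0.7]
z_kept, dropped = split_forward(x)
x_hat = split_reverse(z_kept)

# the retained half reconstructs exactly; the re-sampled half does not
assert x_hat[:2] == x[:2]
```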
```python
# decode
x_hat = graph(z, reverse=True)
```

If you leave the parameter unnamed, it assumes you are supplying `x` as input, leaving `z` as `None`. Try:

```python
# decode
x_hat = graph(z=z, reverse=True)
```
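A minimal sketch of why the unnamed argument breaks decoding, assuming the model's forward signature looks like `forward(x=None, z=None, ..., reverse=False)` (the signature here is an assumption, not copied from the repo):

```python
# hypothetical stand-in for the model's forward signature
def forward(x=None, z=None, reverse=False):
    if reverse:
        # the decode path reads z; x is ignored
        return ("decoded from", z)
    return ("encoded", x)

# positional argument binds to x, so z stays None and decoding is broken
print(forward([1, 2], reverse=True))    # ('decoded from', None)

# keyword argument binds to z, as intended
print(forward(z=[1, 2], reverse=True))  # ('decoded from', [1, 2])
```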
Hi,
First of all, thank you for a nice repo.
I am trying to map an image `x` to a latent representation `z` and then map back to its reconstruction `x_hat` with a trained model, but `x` and `x_hat` are not similar at all. I understand that they may not be totally identical due to the `Split2d` layer, but the degree of difference is way too severe. I ran the training script, and the reconstructed images shown in TensorBoard are very similar to their corresponding inputs.
Here are rough snippets that can reproduce my problem.
For defining and loading the dataset and model:
Now I want to encode a batch of images and decode them back.
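The round trip I have in mind looks roughly like this (a sketch with a mock model, since the real call goes through the trained Glow wrapper; the callable signature here is an assumption mirroring the snippets in this thread):

```python
class MockGraph:
    """Stand-in for the trained model; the real forward also returns
    log-determinant terms alongside z."""
    def __call__(self, x=None, z=None, reverse=False):
        if reverse:
            return z   # decode path: identity stand-in
        return x       # encode path: identity stand-in

graph = MockGraph()
img_x = [0.1, 0.2, 0.3]

z = graph(img_x)                    # encode a batch of images
x_hat = graph(z=z, reverse=True)    # decode back (note the keyword argument)
```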
When I visualize `img_x` and `x_hat`, I find a significant discrepancy. As you can see, those images are very different.
As a sanity check, I ran unconditional sampling. It gives reasonably fine images, especially with an apt choice of `eps_std`, so the model is well trained.