andreas128 / SRFlow

Official SRFlow training code: Super-Resolution using Normalizing Flow in PyTorch

Why doesn't the LR encoding network g_θ need to be invertible? #26

Open qiufengmama opened 3 years ago

martin-danelljan commented 3 years ago

Because it is only used for conditioning. Please see the paper if you want the in-depth explanation and derivation.
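
To illustrate the point, here is a minimal sketch (not SRFlow's actual implementation) of a conditional affine coupling layer. The LR encoding u = g_θ(lr) only parameterizes the scale and shift; inverting the layer simply re-uses u computed from the LR image, so g_θ itself never has to be inverted.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Sketch of a coupling layer conditioned on LR features u."""
    def __init__(self, channels, cond_channels, hidden=64):
        super().__init__()
        # Predicts log-scale and shift for one half of z from the other
        # half concatenated with the LR conditioning features.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, z, u):
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([z1, u], dim=1)).chunk(2, dim=1)
        z2 = z2 * torch.exp(log_s) + t          # forward direction
        logdet = log_s.flatten(1).sum(dim=1)    # per-sample log-determinant
        return torch.cat([z1, z2], dim=1), logdet

    def inverse(self, y, u):
        y1, y2 = y.chunk(2, dim=1)
        # The same conditioning u (computed by g_theta from the LR image)
        # is plugged in again; only the affine map is inverted.
        log_s, t = self.net(torch.cat([y1, u], dim=1)).chunk(2, dim=1)
        y2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, y2], dim=1)
```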

nachifur commented 3 years ago

@martin-danelljan Thank you for your wonderful work. What puzzles me is this: when the variance of z is set to 0, why can the network still output a super-resolution image with better PSNR, given that no high-frequency detail is injected into the network? I look forward to your reply.
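
For context, a minimal sketch of temperature sampling (the `flow.inverse(z, lr)` call is a hypothetical API, not SRFlow's actual interface): with variance/temperature 0 the latent is all zeros, so the network decodes the single most likely image under the learned conditional density. That output is typically smooth and PSNR-friendly, while a temperature around 0.8 trades PSNR for sampled high-frequency texture.

```python
import torch

def sample_sr(flow, lr, z_shape, tau=0.0):
    # z ~ N(0, tau^2 I); tau = 0 collapses the distribution to its mode z = 0.
    z = tau * torch.randn(z_shape, device=lr.device)
    with torch.no_grad():
        sr = flow.inverse(z, lr)  # decode latent conditioned on the LR image
    return sr

# sr_psnr = sample_sr(flow, lr, z_shape, tau=0.0)  # deterministic, high PSNR
# sr_tex  = sample_sr(flow, lr, z_shape, tau=0.8)  # adds stochastic detail
```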

nachifur commented 3 years ago

My other problem: when training the DF2K_4x model, the validation output is completely black at iteration 160000, while it is still normal at iteration 80000. For all later iterations the output stays black, which seems abnormal. Can you explain this?

(Attached validation images at heat 0.50: 0_000080000_h050_s1 and 0_000160000_h050_s1.)

The loss curve still appears to decrease normally (loss plot attached).
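
As a debugging aid, here is a small diagnostic sketch (a hypothetical helper, not part of the SRFlow repo). Completely black validation images after many iterations often mean the network emits NaN/Inf or saturated values, so it is worth inspecting the raw outputs before they are clamped and written to disk.

```python
import torch

def check_output(sr, it):
    # Report value range and any NaN/Inf in the raw SR tensor.
    n_nan = torch.isnan(sr).sum().item()
    n_inf = torch.isinf(sr).sum().item()
    print(f"iter {it}: min={sr.min().item():.4f} max={sr.max().item():.4f} "
          f"nan={n_nan} inf={n_inf}")
    # If nan/inf counts are non-zero, lowering the learning rate or adding
    # gradient clipping are common mitigations for flow-training blow-ups.
```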