Wenchao-Du / LIR-for-Unsupervised-IR

This is an implementation for the CVPR2020 paper "Learning Invariant Representation for Unsupervised Image Restoration"
https://arxiv.org/pdf/2003.12769.pdf

About KL Loss #3

Open qibao77 opened 4 years ago

qibao77 commented 4 years ago

The work is interesting! However, in your paper you only add a KL divergence loss to regularize the distribution of the noise code, whereas in your open-source code the KL loss is applied to all latent features. Why is there such a difference? Is the KL loss important to the final result?

Wenchao-Du commented 4 years ago

Adding the KL loss on the latent codes was only used to validate the effect of the loss functions in my experiments, e.g., combining the GAN loss and the KL loss, but my results show it has little effect on the metrics. You can remove it from the source code.

qibao77 commented 4 years ago

Thank you for your reply!

qibao77 commented 4 years ago

> Adding the KL loss on the latent codes was only used to validate the effect of the loss functions in my experiments, e.g., combining the GAN loss and the KL loss, but my results show it has little effect on the metrics. You can remove it from the source code.

Another question: I found that the KL loss in your code is actually an L2 regularization (it uses only the mean), whereas the KL loss of a VAE should involve both the mean and the variance. Why is there such a difference?
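For reference, here is a minimal sketch of the two terms being compared; the function names are hypothetical and this is not the repo's actual code, just an illustration of the difference between a mean-only L2 penalty and the standard closed-form VAE KL divergence against N(0, I):

```python
import torch

def mean_only_regularizer(mu):
    # What the issue describes the released code as doing:
    # an L2 penalty that pushes the latent mean toward zero.
    # The variance of the latent code is left unconstrained.
    return torch.mean(mu ** 2)

def vae_kl_loss(mu, logvar):
    # Standard closed-form KL divergence KL(N(mu, sigma^2) || N(0, I)),
    # which penalizes both the mean and the (log-)variance of the latent code.
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
```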

yuguochencuc commented 3 years ago

I have the same question about the KL loss. It seems the author only uses an L2 term to push the mean toward zero, which is different from the regular KL divergence. Can you explain the difference?