roimehrez / contextualLoss

The Contextual Loss
http://cgm.technion.ac.il/Computer-Graphics-Multimedia/Software/Contextual/

A question about contextual loss usage in super-resolution #11

Closed hahazhky closed 6 years ago

hahazhky commented 6 years ago

Hello, I am trying to re-implement your paper "Learning to Maintain Natural Image Statistics". In Section 4.1 (Proposed solution) you say the L2 loss is computed at low resolution, but in Equation (9) the L2 loss is computed between G(s) and y, i.e. the generated and target images, which are high resolution. Which is correct?

mikigom commented 6 years ago

@hahazhky I'm not the author of the paper, but I think I can answer this question because the answer is clear.

G(s) and y are at high resolution. For computing the L2 loss, both G(s) and y are blurred with a convolution filter. The blurred G(s) and y are denoted G(s)^{LF} and y^{LF}, respectively. So the L2 loss is computed neither on the original sharp HR images nor on downsampled LR images, but on blurred (low-frequency) versions of the HR images.

I think the authors use this blurring to avoid applying the L2 loss directly to the full high-resolution content, leaving the high-frequency details of the reconstruction unconstrained. (This sentence is my personal opinion.)
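As a rough illustration of the term described above (this is a minimal sketch, not code from the repository; the kernel, sigma, and function name are illustrative assumptions), both the generated and target images stay at high resolution but are Gaussian-blurred before the L2 comparison:

```python
# Minimal sketch of a low-frequency L2 term: blur both images, then compare.
# Sigma and the helper name are assumptions, not taken from the paper's code.
import numpy as np
from scipy.ndimage import gaussian_filter

def low_frequency_l2(generated, target, sigma=3.0):
    """L2 distance between Gaussian-blurred (low-frequency) versions
    of two high-resolution images with identical shape (H, W, C)."""
    g_lf = gaussian_filter(generated, sigma=(sigma, sigma, 0))  # G(s)^{LF}
    y_lf = gaussian_filter(target, sigma=(sigma, sigma, 0))     # y^{LF}
    return np.mean((g_lf - y_lf) ** 2)
```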

roimehrez commented 6 years ago

Thanks @mikigom. What you wrote is 100% correct. The L2 on the blurry version (denoted LF) allows the GAN to freely generate the right texture.

*Note that low frequency is not low resolution, but rather a blurry version of the high-resolution image.
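To make the LF vs. LR distinction concrete (a small sketch with assumed shapes, not from the repository), blurring keeps the spatial size of the HR image, whereas a low-resolution image is downsampled:

```python
# LF (blurred HR) keeps the HR spatial size; LR is genuinely downsampled.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

hr = np.random.rand(256, 256, 3).astype(np.float32)  # high-resolution image
lf = gaussian_filter(hr, sigma=(3.0, 3.0, 0))         # low frequency: blurred HR
lr = zoom(hr, (0.25, 0.25, 1))                        # low resolution: downsampled

print(lf.shape)  # (256, 256, 3) -- same resolution as hr
print(lr.shape)  # (64, 64, 3)   -- lower resolution
```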

fsalmasri commented 5 years ago

There is something missing here: I can't find anything in their code about the LF image comparison. It also seems like the network is learning how to produce the inverse of the Gaussian kernel.