Closed: yian2271368 closed this issue 4 years ago
@yian2271368 Instead of using the 2-class classification loss, lsgan optimizes the mse between the generated confidence map and 1.
@layumi hi, thanks for replying. I understand that LSGAN optimizes with MSE, but from my understanding it should be a scalar minus 1 instead of the whole feature map minus 1? In other words, in the equation, out0 should be a scalar rather than a feature map.
@yian2271368 The idea is from PatchGAN, which applies the loss on the feature map to supervise the discriminator: every spatial location of the map scores one patch of the input, rather than one scalar scoring the whole image. In my experience, it works better than using a scalar.
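A minimal NumPy sketch of that objective (not the repo's exact code; the shape and target value follow the discussion in this thread): the discriminator emits a confidence map, and LSGAN pushes every patch score toward 1 via MSE.

```python
import numpy as np

def lsgan_loss_toward_one(conf_map):
    # PatchGAN-style LSGAN term: (out - 1)^2 averaged over the batch
    # and every spatial location of the confidence map, so each patch
    # of the input is supervised independently.
    return np.mean((conf_map - 1.0) ** 2)

# A toy confidence map with the shape mentioned below:
# (batch=8, channel=1, H=64, W=32)
outs0 = np.full((8, 1, 64, 32), 0.5)
loss = lsgan_loss_toward_one(outs0)  # every entry is 0.5 away from 1 -> 0.25
```

Averaging over the map instead of collapsing it to a scalar first gives the discriminator a per-patch training signal, which is the point of the PatchGAN design.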
@layumi thanks a lot! That makes much more sense.
https://github.com/NVlabs/DG-Net/blob/0abf564a853ea6ec3f38ab71a4a69f7f23b19d24/networks.py#L147
Hey there, here "outs0" is a feature map of shape 8x1x64x32; you subtract 1 from it and take the mean as the loss. What is the principle behind this?