SayedNadim / Global-and-Local-Attention-Based-Free-Form-Image-Inpainting

Official implementation of "Global and local attention-based free-form image inpainting"

Lorentzian distance #14

Closed · ray0809 closed this issue 4 years ago

ray0809 commented 4 years ago

Hi, I found that the formulas of the Lorentzian metric in the G and D losses are actually equivalent: https://github.com/SayedNadim/Global-and-Local-Attention-Based-Free-Form-Image-Inpainting/blob/0b7eec3154bed9c646e18b77570d520efeb9f9ab/scripts/trainer.py#L57 https://github.com/SayedNadim/Global-and-Local-Attention-Based-Free-Form-Image-Inpainting/blob/0b7eec3154bed9c646e18b77570d520efeb9f9ab/scripts/trainer.py#L45

When I train with this code, the discriminator adversarial loss and the generator adversarial loss show almost no volatility. Maybe d_loss_loren should be rewritten as something like d_loss_loren = log(2) - mean(log(1 + abs(P - Q)))?
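
A minimal PyTorch sketch contrasting the two variants discussed here, assuming the symmetric form described in this thread; `d_real` and `d_fake` are illustrative discriminator scores, not the repository's actual variable names:

```python
import math
import torch

def lorentzian(p, q):
    """Lorentzian metric between two score tensors: mean(log(1 + |p - q|))."""
    return torch.mean(torch.log(1.0 + torch.abs(p - q)))

# Dummy discriminator scores, purely for illustration.
d_real = torch.rand(8, 1)  # D(x) on real images
d_fake = torch.rand(8, 1)  # D(G(z)) on completed images

# Symmetric form: generator and discriminator share the same term,
# so the two adversarial losses track each other during training.
g_loss_loren = lorentzian(d_real, d_fake)
d_loss_loren = lorentzian(d_real, d_fake)

# Proposed rewrite: subtract from log(2) so the discriminator is instead
# rewarded for pushing the real and fake scores apart. The constant log(2)
# keeps the loss non-negative whenever |d_real - d_fake| <= 1
# (e.g. sigmoid-activated scores).
d_loss_loren_rewrite = math.log(2.0) - lorentzian(d_real, d_fake)
```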

SayedNadim commented 4 years ago

Hi, thanks for the excellent question. The idea of integrating the Lorentzian loss was to give the same penalty to both the generator and the discriminator. However, if you want each of them to receive its own penalty, you can rewrite the function as you have stated. In my opinion, you then need to set a weight for d_loss_loren so that it does not over-penalize the generator. Cheers!
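
A hedged sketch of the weighting idea suggested above; `lambda_loren` and the other names are hypothetical placeholders, not identifiers from the repository:

```python
import torch

# Placeholder values purely for illustration.
d_loss_loren = torch.tensor(0.4)  # rewritten Lorentzian term for D
d_loss_other = torch.tensor(0.6)  # any other discriminator loss terms

# Small weight so a strong discriminator does not over-penalize the generator.
lambda_loren = 0.1
d_loss = d_loss_other + lambda_loren * d_loss_loren
```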

ray0809 commented 4 years ago

Hi, from the form of the formula, the Lorentzian loss will force the distributions of real and fake to be closer: log(1 + abs(P - Q)) -> 0 is equivalent to P - Q -> 0. Isn't that against the discriminator's intention?

SayedNadim commented 4 years ago

Hi, the relativistic GAN paper states that the discriminator estimates the probability that the given real data is more realistic than the fake data, on average. Enforcing the discriminator to give more focus to the real data eventually forces the generator to generate more realistic data. However, I did not try your modification. Please feel free to share your findings, and let me know if I can help with anything. Cheers!
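
For reference, a minimal PyTorch sketch of the relativistic average losses from the cited paper (Jolicoeur-Martineau, 2018), illustrating the "more realistic than fake, on average" idea; this is not code from this repository:

```python
import torch
import torch.nn.functional as F

# Raw (pre-sigmoid) critic outputs; dummy values for illustration.
c_real = torch.randn(8, 1)
c_fake = torch.randn(8, 1)

real_rel = c_real - c_fake.mean()  # how much more realistic real looks vs. the average fake
fake_rel = c_fake - c_real.mean()  # how much more realistic fake looks vs. the average real

ones, zeros = torch.ones_like(c_real), torch.zeros_like(c_real)

# Discriminator: real should score above the average fake, and vice versa.
d_loss = (F.binary_cross_entropy_with_logits(real_rel, ones)
          + F.binary_cross_entropy_with_logits(fake_rel, zeros))

# Generator: the reverse, pushing fakes above the average real.
g_loss = (F.binary_cross_entropy_with_logits(fake_rel, ones)
          + F.binary_cross_entropy_with_logits(real_rel, zeros))
```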

ray0809 commented 4 years ago

Thanks for your reply! I will try it.