heykeetae / Self-Attention-GAN

Pytorch implementation of Self-Attention Generative Adversarial Networks (SAGAN)
2.53k stars, 475 forks

hinge loss #20

Open IPNUISTlegal opened 6 years ago

IPNUISTlegal commented 6 years ago

(image attached) I found citations [13], [16], and [30], but I still don't understand the exact principle behind the hinge loss. I'm also confused about why the WGAN loss function isn't used instead — does the hinge loss perform better than the WGAN loss?
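For reference, the hinge loss used in SAGAN-style GANs is straightforward to write down. The sketch below is my own minimal version (the function names and toy tensors are hypothetical, not from this repo); `d_real` and `d_fake` stand for the discriminator's raw, unbounded outputs on real and generated samples:

```python
import torch
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # D is pushed to score reals >= 1 and fakes <= -1; outputs beyond
    # those margins contribute zero loss, so D stops over-optimizing.
    loss_real = F.relu(1.0 - d_real).mean()
    loss_fake = F.relu(1.0 + d_fake).mean()
    return loss_real + loss_fake

def g_hinge_loss(d_fake):
    # G simply maximizes D's score on generated samples (no margin).
    return -d_fake.mean()

# Toy example with hand-picked scores:
d_real = torch.tensor([1.5, 0.5])   # one real already past the margin
d_fake = torch.tensor([-2.0, 0.0])  # one fake already past the margin
print(d_hinge_loss(d_real, d_fake).item())  # 0.75
print(g_hinge_loss(d_fake).item())          # 1.0
```

Unlike the WGAN loss, the margins saturate the discriminator loss, which is one reason hinge loss trains stably with spectral normalization and without a gradient penalty.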

ShihuaHuang95 commented 6 years ago

The author states in the "Meta overview" section of README.md: "Remove all the spectral normalization at the model for the adoption of wgan-gp". You can test it yourself to see whether WGAN-GP performs better.
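If you do try swapping in WGAN-GP, the extra piece you need is the gradient penalty on interpolates between real and fake samples. Below is a generic sketch (not code from this repo; the toy linear critic is just for illustration), assuming a critic `D` that maps a batch of flat feature vectors to scalar scores:

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    # Interpolate between real and fake samples with a random mixing factor.
    eps = torch.rand(real.size(0), 1)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_interp = D(interp)
    # Gradient of the critic's output w.r.t. the interpolated inputs.
    grads = torch.autograd.grad(d_interp.sum(), interp, create_graph=True)[0]
    # Penalize the gradient norm for deviating from 1 (the Lipschitz target).
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Toy critic and data, purely for demonstration.
D = torch.nn.Linear(3, 1)
real = torch.randn(4, 3)
fake = torch.randn(4, 3)
gp = gradient_penalty(D, real, fake)

# WGAN critic loss would then be: d_fake.mean() - d_real.mean() + gp
```

The penalty enforces the 1-Lipschitz constraint softly, which is why the README suggests dropping spectral normalization (a hard Lipschitz constraint) when adopting WGAN-GP — combining both over-constrains the critic.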