ThisIsIsaac closed this issue 5 years ago.
I tried spectral normalization, but I couldn't make it work. Maybe I can try again this weekend.
I applied the LearningToPaint framework to another task, and found that WGAN-GP with spectral normalization can improve training stability. In my experiments, however, soft target and spectral normalization were not compatible in discriminator (D) training.
Thanks for sharing the insight :)
Have you tried a spectral normalization GAN and adding an L1 distance term to the WGAN loss? I wonder how these two changes would impact performance:
1. Replacing WGAN-GP with spectral normalization
Spectral normalization has two main advantages:
A slight performance improvement over WGAN-GP on ResNet: the Inception score with spectral normalization was higher by roughly 0.16, with less deviation than WGAN-GP.
Spectral normalization is roughly 30% more computationally efficient than WGAN-GP. Since both the actor and the critic use ResNet as their backbone, replacing WGAN-GP with spectral normalization could yield a meaningful speedup (see the sketch after this list).
2. Combining WGAN-GP with spectral normalization
The authors of the spectral normalization paper suggest that combining WGAN-GP with spectral normalization can further improve results compared to both the baseline WGAN-GP and the spectral normalization GAN.
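For concreteness, here is a minimal PyTorch sketch of what both options could look like, not code from this repo: the critic's layers are wrapped in `torch.nn.utils.spectral_norm` (option 1), an optional gradient penalty term covers the combined variant (option 2), and an optional L1 term on the actor side covers the "L1 distance added to the WGAN loss" idea. The `SNCritic` architecture, the `gp_weight`/`l1_weight` values, and the helper names are illustrative assumptions.

```python
# Illustrative sketch only (not LearningToPaint's actual code).
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNCritic(nn.Module):
    """Toy WGAN critic; every learnable layer is wrapped with spectral_norm."""
    def __init__(self, in_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_ch, width, 4, 2, 1)), nn.LeakyReLU(0.2),
            spectral_norm(nn.Conv2d(width, width * 2, 4, 2, 1)), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            spectral_norm(nn.Linear(width * 2, 1)),
        )

    def forward(self, x):
        return self.net(x)

def gradient_penalty(critic, real, fake, device):
    """Standard WGAN-GP term; only needed for the combined SN + GP variant."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(mixed).sum(), mixed, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def critic_loss(critic, real, fake, use_gp=False, gp_weight=10.0):
    # Wasserstein critic loss; `fake` is assumed detached from the actor graph.
    # With spectral norm alone (option 1) the gradient penalty is dropped.
    loss = critic(fake).mean() - critic(real).mean()
    if use_gp:  # option 2: spectral norm + gradient penalty
        loss = loss + gp_weight * gradient_penalty(critic, real, fake, real.device)
    return loss

def actor_loss(critic, fake, target, l1_weight=0.0):
    # Actor/generator side: negative critic score, optionally plus an L1 term
    # between the rendered canvas and the target image.
    loss = -critic(fake).mean()
    if l1_weight > 0:
        loss = loss + l1_weight * (fake - target).abs().mean()
    return loss
```

Usage would be the usual alternating loop: update the critic with `critic_loss` (passing `use_gp=True` for the combined variant), then the actor with `actor_loss`. With spectral norm alone, the gradient penalty and its extra backward pass are skipped, which is presumably where the ~30% efficiency gain would come from.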