About using the Wasserstein loss function for GANs in order to keep a meaningful gradient from the discriminator.
This addresses the problem that the discriminator can learn its task too well and then no longer tells the generator how to improve: the generator ends up with a (near-)zero gradient.
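A minimal sketch of the idea in NumPy (my own illustration, not from the linked talks): instead of a log-loss, the WGAN critic's score is used directly, so the generator's loss stays informative even when the critic separates real from fake easily. The original WGAN paper additionally clips the critic's weights to approximate the 1-Lipschitz constraint; the clip value 0.01 here is just the paper's default, not something fixed by theory.

```python
import numpy as np

def critic_loss(d_real, d_fake):
    # WGAN critic maximizes mean D(real) - mean D(fake);
    # written as a loss to minimize:
    return np.mean(d_fake) - np.mean(d_real)

def generator_loss(d_fake):
    # Generator tries to raise the critic's score on fakes.
    # The gradient of a plain mean doesn't saturate the way
    # log(1 - D(fake)) does when the critic is confident.
    return -np.mean(d_fake)

def clip_weights(w, c=0.01):
    # Weight clipping: a crude way to keep the critic
    # approximately 1-Lipschitz (later work uses a gradient penalty).
    return np.clip(w, -c, c)
```

Note how even a "perfect" critic (large gap between `d_real` and `d_fake`) still yields a nonzero, useful gradient for the generator, which is exactly the property motivating the Wasserstein formulation.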
I haven't checked links below yet (no headphones with me):
GAN Lecture 6 (2018): WGAN, EBGAN https://youtu.be/3JP-xuBJsyc via @YouTube
Nuts and Bolts of WGANs, Kantorovich-Rubistein Duality, Earth Movers Dis... https://youtu.be/31mqB4yGgQY via @YouTube