NLeSC / Machine_Learning_SIG

The topics discussed in the Machine Learning SIG group.

wgan #6

Open cwmeijer opened 5 years ago

cwmeijer commented 5 years ago

About using the Wasserstein loss function for GANs in order to keep a meaningful gradient from the discriminator.

This addresses the problem that the discriminator can learn its task too well and then stops giving the generator useful feedback on how to improve, which results in a near-zero gradient for the generator.
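As a minimal sketch of the idea (my own illustration, not code from the linked lectures): the WGAN critic drops the sigmoid output and is trained to maximise the score gap E[D(real)] - E[D(fake)], while the original paper keeps the critic roughly 1-Lipschitz by clipping its weights. The toy linear critic and the clipping constant `c = 0.01` below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "critic" D(x) = w . x + b; note: no sigmoid on the output,
# so its score is unbounded and the gradient does not saturate.
w = rng.normal(size=3)
b = 0.0

def critic(x):
    return x @ w + b

real = rng.normal(loc=1.0, size=(64, 3))   # samples from the data distribution
fake = rng.normal(loc=-1.0, size=(64, 3))  # samples from the generator

# Critic loss: minimise -(E[D(real)] - E[D(fake)]),
# i.e. maximise the Wasserstein score gap.
critic_loss = -(critic(real).mean() - critic(fake).mean())

# Generator loss: push the critic's score on fakes up.
gen_loss = -critic(fake).mean()

# Weight clipping (original WGAN) to roughly enforce the 1-Lipschitz
# constraint required by the Kantorovich-Rubinstein duality.
c = 0.01
w = np.clip(w, -c, c)

print(critic_loss, gen_loss)
```

Because the critic's score is unbounded, the generator keeps receiving a usable gradient even when the critic separates real and fake samples perfectly, which is exactly the failure mode described above.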

I haven't checked the links below yet (no headphones with me):

- GAN Lecture 6 (2018): WGAN, EBGAN https://youtu.be/3JP-xuBJsyc via @YouTube
- Nuts and Bolts of WGANs, Kantorovich-Rubistein Duality, Earth Movers Dis... https://youtu.be/31mqB4yGgQY via @YouTube