jalola / improved-wgan-pytorch

Improved WGAN in Pytorch
MIT License

Generator loss function #15

Closed atheeth96 closed 4 years ago

atheeth96 commented 4 years ago

During the training of the generator, the gradients are calculated w.r.t. D(G(z)) instead of -D(G(z)).
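For context, in WGAN the generator is trained to maximize the critic score, i.e. to minimize -D(G(z)). A minimal sketch of that sign convention (the models and names here are illustrative, not this repo's):

```python
import torch
import torch.nn as nn

# Toy generator and critic; illustrative only, not this repo's models.
G = nn.Linear(8, 4)
D = nn.Linear(4, 1)

z = torch.randn(16, 8)
# WGAN generator loss: minimize -D(G(z)), so G pushes the critic score up.
gen_cost = -D(G(z)).mean()
gen_cost.backward()

# Gradients on G now point toward increasing D(G(z)).
assert all(p.grad is not None for p in G.parameters())
```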

jalola commented 4 years ago

Hi,

When we run backward for G, we pass minus one (-1) into backward() to flip the sign of the gradients. In hindsight, that was an awkward way to implement it.

It should be like this:

```python
gen_cost = -aD(fake_data)
gen_cost.backward()
```

https://github.com/jalola/improved-wgan-pytorch/blob/master/train.py#L132
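The two formulations are equivalent: calling `backward()` with a -1 gradient on the positive score produces exactly the same gradients as negating the loss first, because `loss.backward(g)` computes the vector-Jacobian product g·∂loss/∂θ. A small sketch demonstrating this (the names here are illustrative):

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
D = nn.Linear(4, 1)     # stand-in critic; illustrative only
D2 = copy.deepcopy(D)   # identical copy to compare gradients
x = torch.randn(8, 4)

# Old style: backward with a minus-one gradient on the positive score.
mone = torch.tensor(-1.0)
D(x).mean().backward(mone)

# Clearer style: negate the loss, then plain backward().
(-D2(x).mean()).backward()

# Both produce identical gradients.
for p, q in zip(D.parameters(), D2.parameters()):
    assert torch.allclose(p.grad, q.grad)
```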