Open lifelongeek opened 7 years ago
In "Improved Training of Wasserstein GANs" (https://arxiv.org/pdf/1704.00028.pdf), the authors add a gradient norm penalty term to the critic's loss. Training with it requires computing the gradient of norm(Model.gradInput) with respect to the Model's parameters (a second-order gradient) and accumulating the result into the Model's gradParams.
Is this computation available in torch's current implementation? Or do you plan to add any related features?
Thanks for the developers' hard work.
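For concreteness, here is a tiny numpy sketch (not torch code) of the quantity in question: the penalty is (||grad_x D(x)|| - 1)^2, and what's needed is its derivative with respect to the critic's parameters. The linear critic `D(x) = w . x` and all names here are illustrative assumptions, chosen so the second-order term has a closed form that can be checked by finite differences.

```python
import numpy as np

# Toy critic D(x) = w . x, so grad_x D(x) = w (constant in x for this toy case).
# WGAN-GP-style penalty on the gradient norm: (||grad_x D||_2 - 1)^2.

def penalty(w):
    return (np.linalg.norm(w) - 1.0) ** 2

def penalty_grad_wrt_params(w):
    # Analytic derivative of the penalty w.r.t. the critic parameters w:
    # d/dw (||w|| - 1)^2 = 2 (||w|| - 1) * w / ||w||
    n = np.linalg.norm(w)
    return 2.0 * (n - 1.0) * w / n

w = np.array([0.6, 0.8, 1.2])
analytic = penalty_grad_wrt_params(w)

# Finite-difference check of the analytic gradient.
eps = 1e-6
numeric = np.zeros_like(w)
for i in range(len(w)):
    wp, wm = w.copy(), w.copy()
    wp[i] += eps
    wm[i] -= eps
    numeric[i] = (penalty(wp) - penalty(wm)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))
```

In a real network the gradient depends on both the input and every layer's parameters, so this derivative cannot be written down by hand; it needs a second backward pass through the backward graph, which is the feature being asked about.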