Closed. jhauret closed this pull request 3 months ago.
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 70%. Comparing base (d85a312) to head (c39a740).
Thank you, and @jhauret, if you would like to continue the refactoring any time in the future, all help is welcome! :rabbit:
What does this PR do?
Fixes https://github.com/Lightning-AI/tutorials/issues/306
PR review
Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
I did! Glad to contribute to the best PyTorch wrapper 🙃
Notes
The quality of the generated samples remains low, but I don't have time to tune the architecture further. Also, applying tricks to reach a Nash equilibrium, such as label smoothing, could make the example harder to follow, so I left it as is.
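For reference, one-sided label smoothing is a small change to the discriminator loss. This is a minimal sketch, not part of this PR; the function name and the 0.9 smoothing value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits, smooth=0.9):
    # One-sided label smoothing: real targets are `smooth` (e.g. 0.9)
    # instead of 1.0, which keeps the discriminator from becoming
    # overconfident; fake targets stay at 0.0.
    real_targets = torch.full_like(real_logits, smooth)
    fake_targets = torch.zeros_like(fake_logits)
    real_loss = F.binary_cross_entropy_with_logits(real_logits, real_targets)
    fake_loss = F.binary_cross_entropy_with_logits(fake_logits, fake_targets)
    return (real_loss + fake_loss) / 2
```

Dropping `smooth` back to 1.0 recovers the standard non-smoothed loss, which is what the example keeps for readability.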
This PR was motivated by the case of a heavy generator and discriminator that we don't want to call multiple times. I removed the repeated calls to the generator (which changes the optimization problem slightly, because the discriminator is now trained to detect the very sample the generator just produced), BUT I did not remove the repeated calls to the discriminator. In fact, trying
self.manual_backward(g_loss, retain_graph=True)
caused an error when backpropagating on the discriminator loss, because the generator weights had changed in the meantime (RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation). If anyone knows how to remove the two calls to the discriminator, I'm curious!

Logging is fixed and on_validation_epoch_end is now called.
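The pattern above can be sketched in plain PyTorch. This is not the tutorial's actual code; the tiny linear models, shapes, and optimizer settings are made up for illustration. It shows a single generator forward pass reused for both updates, with the fake sample detached for the discriminator step so that the second backward does not try to flow through the already-updated generator:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny models standing in for the GAN's networks.
generator = nn.Linear(8, 4)
discriminator = nn.Linear(4, 1)
opt_g = torch.optim.SGD(generator.parameters(), lr=0.1)
opt_d = torch.optim.SGD(discriminator.parameters(), lr=0.1)

def training_step(real, z):
    # Single generator call, reused for both optimizer updates.
    fake = generator(z)

    # Generator update: first discriminator call, graph runs through
    # the generator so its weights receive gradients.
    g_loss = F.binary_cross_entropy_with_logits(
        discriminator(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # Discriminator update: second discriminator call, but on the
    # detached fake sample, so no gradient flows into the (already
    # updated) generator and no retain_graph is needed.
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(
            discriminator(real), torch.ones(real.size(0), 1))
        + F.binary_cross_entropy_with_logits(
            discriminator(fake.detach()), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    return g_loss, d_loss
```

Because the discriminator is called once for the generator update and twice for its own update, this still makes repeated discriminator calls, which is exactly the part the PR leaves open.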