Simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch. Enabling everyone to experience disentanglement
Hi. I've noticed that you keep the whole GAN unfrozen and only step one optimizer at a time (one for the generator, one for the discriminator). But when you call loss.backward(), gradients are computed for the WHOLE GAN, whereas each optimizer only needs the gradients of its own parameters. This causes extra memory use and longer iteration times.
Please correct me if I am wrong.
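One common remedy for this (a sketch of the general technique, not necessarily how this repo structures its training loop) is to toggle `requires_grad` on the frozen half of the GAN before each `backward()`, so autograd skips computing and storing gradients for those parameters. The `set_requires_grad` helper and the tiny `G`/`D` modules below are hypothetical stand-ins for illustration:

```python
import torch
import torch.nn as nn

def set_requires_grad(module: nn.Module, flag: bool) -> None:
    # Toggle gradient tracking for every parameter of the module.
    for p in module.parameters():
        p.requires_grad_(flag)

# Hypothetical minimal generator/discriminator pair for illustration.
G = nn.Linear(8, 8)
D = nn.Linear(8, 1)

opt_D = torch.optim.Adam(D.parameters())

# Discriminator step: freeze G so backward() never computes G's gradients.
set_requires_grad(G, False)
set_requires_grad(D, True)

z = torch.randn(4, 8)
fake = G(z)                       # G's weights are not tracked here
loss_D = D(fake.detach()).mean()  # detach also cuts the graph back to G
opt_D.zero_grad()
loss_D.backward()                 # only D's gradients are populated
opt_D.step()

set_requires_grad(G, True)        # restore before the generator step
```

Note that `fake.detach()` already severs the graph for the discriminator step; toggling `requires_grad` matters most on the generator step (freezing D) and for any regularization terms that backprop through both networks.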