eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

learning rate and optimizer phase when resuming training #189

Closed YSL0226 closed 3 years ago

YSL0226 commented 3 years ago

We can resume training with the 'checkpoint_path' argument.

Does 'checkpoint_path' also restore the learning rate and the optimizer state? I didn't find this in the code, so maybe I missed it. If it is restored, could you point me to where? If not, how can we resume the learning rate and optimizer state? Looking forward to your reply.

yuval-alaluf commented 3 years ago

We don't save the optimizer state when saving the checkpoint, mainly because it makes the saved .pt files quite large (around 4 GB, if I remember correctly). If you do want to add this, you can follow this guide: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-a-general-checkpoint-for-inference-and-or-resuming-training
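
For reference, the pattern from the linked PyTorch guide looks roughly like the sketch below. This is not code from the pSp repo; the model, optimizer, and key names are placeholders for illustration:

```python
# Hedged sketch of the "general checkpoint" pattern from the PyTorch guide.
# The Linear model and the dict keys ("state_dict", "optimizer", "global_step")
# are assumptions, not the pSp repo's actual names.
import torch

model = torch.nn.Linear(4, 2)  # stand-in for the pSp network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Saving: store the optimizer state_dict alongside the model weights.
torch.save({
    "state_dict": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "global_step": 1000,
}, "checkpoint.pt")

# Resuming: restore both before continuing training, so Adam's moment
# estimates and the param-group learning rates pick up where they left off.
ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["state_dict"])
optimizer.load_state_dict(ckpt["optimizer"])
global_step = ckpt["global_step"]
```

Note that the optimizer state_dict holds per-parameter buffers (e.g. Adam's first and second moments), which is exactly why bundling it into the checkpoint roughly doubles or triples the file size.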