Closed · zxyl1003 closed this issue 1 year ago
How do I get lr_scheduler to update the learning rate at epoch intervals and eliminate warnings?
To update the schedulers at an epoch interval in manual optimization, you do it here:

```python
def on_train_epoch_end(self) -> None:
    scheduler_g, scheduler_d = self.lr_schedulers()
    scheduler_d.step()
    scheduler_g.step()
```
Is that what you were looking for? Note: the `interval` entry in the configuration dict returned from `configure_optimizers` only applies in automatic optimization.
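For completeness, here is a minimal sketch of what this looks like in a two-optimizer module. The loss helpers (`generator_loss`, `discriminator_loss`), the Adam/StepLR choices, and the import path (`pytorch_lightning` vs. `lightning.pytorch`, depending on how you installed it) are all assumptions, not the reporter's actual code. Unpacking `self.optimizers()` into two variables also typically avoids the `LightningOptimizer | list` warning from the IDE:

```python
import torch
import pytorch_lightning as pl  # or `import lightning.pytorch as pl`, depending on the install


class SRGANModule(pl.LightningModule):
    def __init__(self, generator, discriminator):
        super().__init__()
        self.generator = generator
        self.discriminator = discriminator
        # Manual optimization: Lightning calls neither optimizer.step(),
        # zero_grad(), nor scheduler.step() for you.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        # Unpacking yields two concrete optimizers instead of the
        # `LightningOptimizer | list` union the IDE complains about.
        opt_g, opt_d = self.optimizers()
        lr_imgs, hr_imgs = batch

        # Hypothetical loss helpers -- replace with your SRGAN losses.
        d_loss = self.discriminator_loss(lr_imgs, hr_imgs)
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

        g_loss = self.generator_loss(lr_imgs, hr_imgs)
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

    def on_train_epoch_end(self) -> None:
        # Step both schedulers once per epoch instead of once per batch.
        scheduler_g, scheduler_d = self.lr_schedulers()
        scheduler_g.step()
        scheduler_d.step()

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=1e-4)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=1e-4)
        sch_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=10)
        sch_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=10)
        return [opt_g, opt_d], [sch_g, sch_d]
```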
Thank you!!! @awaelchli
Bug description
When I was training an SRGAN network, I wanted to train both the CNN and the GAN parts. Since this requires multiple optimizers, I wrote the code following the example in the documentation and set the lr_scheduler interval to `"epoch"` in the `configure_optimizers` function, but I found that this did not work: the learning rate still updated at every batch step. In addition, when I use `opt.zero_grad()` I get a warning: "Reference to 'zero_grad' not found in 'LightningOptimizer | list'". How do I get lr_scheduler to update the learning rate at epoch intervals and eliminate the warning? I show all my code below.

What version are you seeing the problem on?
v2.0
How to reproduce the bug
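A minimal sketch of the setup described above (hypothetical module names, not the reporter's actual code). The `"interval": "epoch"` entry in the returned dicts is the part that has no effect once `automatic_optimization = False`:

```python
def configure_optimizers(self):
    opt_g = torch.optim.Adam(self.generator.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=1e-4)
    sch_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=10)
    sch_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=10)
    # "interval": "epoch" is respected in automatic optimization only.
    return (
        {"optimizer": opt_g, "lr_scheduler": {"scheduler": sch_g, "interval": "epoch"}},
        {"optimizer": opt_d, "lr_scheduler": {"scheduler": sch_d, "interval": "epoch"}},
    )
```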
Error messages and logs
Environment
Current environment
```
#- Lightning Component: Trainer, LightningModule
#- PyTorch Lightning Version: 2.0.0
#- PyTorch Version: 2.0.0+cu117
#- Python version: 3.10
#- OS: Windows
#- CUDA/cuDNN version: cu117
#- GPU models and configuration: RTX 3060 6GB
#- How you installed Lightning (`conda`, `pip`, source): pip
```

More info
PS: I found that `flush_logs_every_n_steps` in `CSVLogger()` doesn't work; it's actually the `log_every_n_steps` parameter in `Trainer()` that controls the interval at which logs are written.

cc @borda
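For reference, a minimal sketch of the logging setup (the save directory and step count are arbitrary placeholders):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger

# It is the Trainer's log_every_n_steps, not the logger's
# flush_logs_every_n_steps, that sets how often metrics are logged.
trainer = Trainer(logger=CSVLogger("logs"), log_every_n_steps=50)
```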