Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

switch from LBFGS to ADAM optimizer during the training loop #3664

Closed · ghost closed this issue 4 years ago

ghost commented 4 years ago

Is it possible to show how the "configure_optimizers" and "training_step" functions should be written for the following code? The purpose of the code is to switch the optimizer from LBFGS to Adam once loss_SUM < 0.3.

import torch.optim as optim

# Start with LBFGS and switch to Adam once the summed epoch loss drops below 0.3.
optimizer = optim.LBFGS(model.parameters(), lr=0.003)
Use_Adam_optim_FirstTime = True
Use_LBFGS_optim = True

for epoch in range(30000):
    loss_SUM = 0
    for i, (x, t) in enumerate(GridLoader):
        x = x.to(device)
        t = t.to(device)
        if Use_LBFGS_optim:
            # LBFGS needs a closure that re-evaluates the model and returns the loss.
            def closure():
                optimizer.zero_grad()
                lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
                loss_total = lg + lb + li
                loss_total.backward(retain_graph=True)
                return loss_total
            loss_out = optimizer.step(closure)
            loss_SUM += loss_out.item()
        elif Use_Adam_optim_FirstTime:
            # First Adam step: build the new optimizer and restore the model weights
            # from the checkpoint taken when the switch condition was met.
            Use_Adam_optim_FirstTime = False
            optimizerAdam = optim.Adam(model.parameters(), lr=0.0003)
            model.load_state_dict(checkpoint['model'])
            optimizerAdam.zero_grad()
            lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
            # Backpropagate through the combined loss in a single call.
            (lg + lb + li).backward()
            optimizerAdam.step()
            loss_SUM += lg.item() + lb.item() + li.item()
        else:
            # Subsequent Adam steps.
            optimizerAdam.zero_grad()
            lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
            (lg + lb + li).backward()
            optimizerAdam.step()
            loss_SUM += lg.item() + lb.item() + li.item()
    # Switch condition: once the epoch loss is low enough, stop using LBFGS
    # and checkpoint the current state for the Adam phase.
    if loss_SUM < 0.3 and Use_LBFGS_optim:
        Use_LBFGS_optim = False
        checkpoint = {'model': model.state_dict(),
                      'optimizer': optimizer.state_dict()}
github-actions[bot] commented 4 years ago

Hi! Thanks for your contribution, great first issue!

awaelchli commented 4 years ago

Hi! In your LightningModule, you could do this:

def on_epoch_start(self):
    if self.loss_SUM < 0.3:
        self.trainer.optimizers[0] = Adam(...)

and start with LBFGS as the default optimizer, returned from configure_optimizers.
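
Fleshed out a bit, a minimal sketch of that idea could look like the following. The on_epoch_start hook and the trainer.optimizers list reflect the Lightning API at the time of this thread and may be named differently in newer releases; loss_SUM and switched are hypothetical bookkeeping attributes, and problem_formulation with its extra arguments is taken from the snippet in the issue and assumed to be available:

import pytorch_lightning as pl
from torch import optim

class SwitchingModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Running sum of the epoch loss; checked (and reset) at each epoch start.
        self.loss_SUM = float("inf")
        self.switched = False

    def configure_optimizers(self):
        # Start with LBFGS. Under automatic optimization, Lightning passes the
        # training-step closure to optimizers that need one, such as LBFGS.
        return optim.LBFGS(self.parameters(), lr=0.003)

    def training_step(self, batch, batch_idx):
        x, t = batch
        # Loss terms from the original snippet (assumed defined elsewhere).
        lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
        loss = lg + lb + li
        self.loss_SUM += loss.item()
        return loss

    def on_epoch_start(self):
        # Swap to Adam once the previous epoch's summed loss drops below 0.3.
        if not self.switched and self.loss_SUM < 0.3:
            self.trainer.optimizers[0] = optim.Adam(self.parameters(), lr=0.0003)
            self.switched = True
        self.loss_SUM = 0.0

Swapping the optimizer in place keeps the rest of the training loop untouched; any LBFGS internal state is simply discarded at the switch.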

rohitgr7 commented 4 years ago

I think this logic is now better done in configure_optimizers itself, in case someone also has schedulers or a scheduler dict, and then re-running the optimizer setup by calling:

def on_epoch_start(self):
    if condition:
        self.trainer.accelerator_backend.setup_optimizers(self)

def configure_optimizers(self):
    if condition:
        return Adam(...)
    else:
        return LBFGS(...)
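
As a minimal sketch of that pattern (the accelerator_backend.setup_optimizers call is the API referenced above and has since moved in newer Lightning versions; use_adam and loss_SUM are hypothetical attributes, with loss_SUM accumulated in training_step as in the previous sketch):

import pytorch_lightning as pl
from torch import optim

class SwitchingModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.loss_SUM = float("inf")  # running epoch loss, accumulated in training_step
        self.use_adam = False

    def configure_optimizers(self):
        # Re-evaluated each time the optimizer setup is re-run below, so any
        # schedulers (or a scheduler dict) returned here are rebuilt as well.
        if self.use_adam:
            return optim.Adam(self.parameters(), lr=0.0003)
        return optim.LBFGS(self.parameters(), lr=0.003)

    def on_epoch_start(self):
        # Once the epoch loss drops below the threshold, flip the flag and
        # rebuild the optimizers through the same setup path.
        if not self.use_adam and self.loss_SUM < 0.3:
            self.use_adam = True
            self.trainer.accelerator_backend.setup_optimizers(self)

The advantage over assigning to trainer.optimizers directly is that everything configured in configure_optimizers, including schedulers, is recreated consistently.
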
awaelchli commented 4 years ago

Moved this to the forum https://forums.pytorchlightning.ai/t/how-to-switch-from-optimizer-during-training/219

Thanks @rohitgr7