Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

How to use the LBFGS optimizer with Lightning? #246

Closed riverarodrigoa closed 5 years ago

riverarodrigoa commented 5 years ago

Hi, I have a problem using the LBFGS optimizer from PyTorch with Lightning. I used the template from here to start a new project, and here is the code that I tried (only the training portion):

    def training_step(self, batch, batch_nb):
        x, y = batch
        x = x.float()
        y = y.float()
        y_hat = self.forward(x)
        return {'loss': F.mse_loss(y_hat, y)}

    def configure_optimizers(self):
        optimizer = torch.optim.LBFGS(self.parameters())
        return optimizer

    def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i):
        def closure():
            optimizer.zero_grad()
            # `batch` is not in scope here -- this is exactly what I don't know how to wire up
            l = self.training_step(batch, batch_nb)
            loss = l['loss']
            loss.backward()
            return loss

        optimizer.step(closure)

The LBFGS optimizer from PyTorch requires a closure function (see here and here), but I don't know how to define it inside the template; in particular, I don't know how the batch data is passed to the optimizer. I tried to define a custom optimizer_step function, but I have problems passing the batch into the closure function.
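For reference, this is how I understand the closure is used with LBFGS in plain PyTorch, without Lightning (the model and tensors below are just dummy placeholders); this is the pattern I am trying to reproduce inside the template:

    import torch
    from torch import nn, optim

    # minimal plain-PyTorch example (no Lightning): LBFGS re-evaluates the model
    # several times per step, so optimizer.step() takes a closure that recomputes the loss
    model = nn.Linear(5, 1)
    x, y = torch.randn(32, 5), torch.randn(32, 1)
    optimizer = optim.LBFGS(model.parameters())

    def closure():
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        return loss

    optimizer.step(closure)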

I will be very thankful for any advice that helps me solve this problem or points me in the right direction. Rodrigo


williamFalcon commented 5 years ago

@riverarodrigoa good suggestion. Agreed, we need to support this. Probably the easiest way is to refactor the main code in the trainer to detect LBFGS/similar optimizers and pass in the closure. Do you want to submit a PR for this? If not, I can take a look at it.
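Roughly what I have in mind, just a sketch of the idea with a hypothetical run_optimizer_step helper, not the actual trainer code (assume the training loop already has the model, batch and batch_nb in scope, and that training_step returns a dict with a 'loss' key):

    import torch

    def run_optimizer_step(model, optimizer, batch, batch_nb):
        """Sketch: detect closure-based optimizers and hand them the closure."""
        def closure():
            # re-run forward/backward; LBFGS may call this several times per step
            optimizer.zero_grad()
            loss = model.training_step(batch, batch_nb)['loss']
            loss.backward()
            return loss

        if isinstance(optimizer, torch.optim.LBFGS):
            optimizer.step(closure)
        else:
            # first-order optimizers: single backward pass, then step as usual
            closure()
            optimizer.step()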

riverarodrigoa commented 5 years ago

Hi @williamFalcon, thanks for looking at my question. I am still learning PyTorch and getting to understand Lightning, so I think it would be better if you look at it, as you have a deeper understanding of the framework. On my side I will try to work on this too, and I'll update this issue with any improvement I make.

williamFalcon commented 5 years ago

@riverarodrigoa Support added in #310. If you run from master, LBFGS will work now.
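With that change you shouldn't need a custom optimizer_step at all; something like this should be enough (rough sketch, the trainer builds the closure internally):

    def configure_optimizers(self):
        # no custom optimizer_step/closure needed -- the trainer detects LBFGS
        # and handles the closure for you
        return torch.optim.LBFGS(self.parameters())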

riverarodrigoa commented 5 years ago

Hi, thanks for adding support for LBFGS. I tried to test it, but I found that my loss increases at every epoch. Could you tell me what is wrong with my code? Am I missing something or defining something wrong?

I'm trying to implement a simple MLP with 10 units in the hidden layer. This is my code:

import os
from collections import OrderedDict
import torch.nn as nn
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import torch
import torch.nn.functional as F
from argparse import ArgumentParser
from torch import optim
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

import pytorch_lightning as pl
from pytorch_lightning.root_module.root_module import LightningModule
from figaro_mlp.mlp2.Data_Figaro import DataFigaroTrain, DataFigaroValid, DataFigaroTest, ToTensor

class FigaroMLP(LightningModule):
    """
    Sample model to show how to define a template
    """

    def __init__(self, hparams):
        """
        Pass in parsed HyperOptArgumentParser to the model
        :param hparams:
        """
        # init superclass
        super(FigaroMLP, self).__init__()
        self.hparams = hparams

        self.batch_size = hparams.batch_size

        # if you specify an example input, the summary will show input/output for each layer
        # self.example_input_array = torch.rand(5, 28 * 28)

        # build model
        self.__build_model()

    # ---------------------
    # MODEL SETUP
    # ---------------------
    def __build_model(self):
        """
        Layout model
        :return:
        """
        self.hidden_layer = nn.Linear(in_features=self.hparams.in_features,
                                      out_features=self.hparams.hidden_dim)
        self.output_layer = nn.Linear(in_features=self.hparams.hidden_dim,
                                      out_features=self.hparams.out_features)
    # ---------------------
    # TRAINING
    # ---------------------
    def forward(self, x):
        """
        No special modification required for lightning, define as you normally would
        :param x:
        :return:
        """
        h = torch.sigmoid(self.hidden_layer(x))
        res = self.output_layer(h)
        return res

    def loss(self, reference, out):
        mse = F.mse_loss(out, reference)
        return mse

    def training_step(self, batch, batch_idx):
        """
        Lightning calls this inside the training loop
        :param batch:
        :return:
        """
        # # forward pass
        x, y = batch

        y_hat = self.forward(x)

        # calculate loss
        loss_val = self.loss(y, y_hat)

        # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
        if self.trainer.use_dp or self.trainer.use_ddp2:
            loss_val = loss_val.unsqueeze(0)

        tqdm_dict = {'train_loss': loss_val}
        output = OrderedDict({
            'loss': loss_val,
            'progress_bar': tqdm_dict,
            'log': tqdm_dict
        })

        # can also return just a scalar instead of a dict (return loss_val)
        return output

    def validation_step(self, batch, batch_idx):
        """
        Lightning calls this inside the validation loop
        :param batch:
        :return:
        """
        x, y = batch
        y_hat = self.forward(x)

        loss_val = self.loss(y, y_hat)

        # in DP mode (default) make sure if result is scalar, there's another dim in the beginning
        if self.trainer.use_dp or self.trainer.use_ddp2:
            loss_val = loss_val.unsqueeze(0)

        output = OrderedDict({
            'val_loss': loss_val,
        })

        # can also return just a scalar instead of a dict (return loss_val)
        return output

    def validation_end(self, outputs):
        """
        Called at the end of validation to aggregate outputs
        :param outputs: list of individual outputs of each validation step
        :return:
        """
        # if returned a scalar from validation_step, outputs is a list of tensor scalars
        # we return just the average in this case (if we want)
        # return torch.stack(outputs).mean()
        val_loss_mean = 0
        for output in outputs:
            val_loss = output['val_loss']

            # reduce manually when using dp
            if self.trainer.use_dp:
                val_loss = torch.mean(val_loss)
            val_loss_mean += val_loss

        val_loss_mean /= len(outputs)
        tqdm_dict = {'val_loss': val_loss_mean}
        result = {'val_loss': val_loss_mean, 'progress_bar': tqdm_dict, 'log': tqdm_dict}
        return result

    # ---------------------
    # TRAINING SETUP
    # ---------------------
    def configure_optimizers(self):
        """
        return whatever optimizers we want here
        :return: list of optimizers
        """
        optimizer = optim.LBFGS(self.parameters(), lr=1)
        # scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
        return [optimizer], []

    def __dataloader(self, train):
        xvars = ['R_2611_C01', 'T_Air', 'H2Od_pc', 'P_Air', 'CO_ppm']
        yvars = ['CH4d_ppm']
        sample = 0

        if train:
            dataset = DataFigaroTrain(filepath=self.hparams.data_path, xvars=xvars, yvars=yvars,
                                      sample=sample, transform=transforms.Compose([ToTensor()]))
        else:
            dataset = DataFigaroValid(filepath=self.hparams.data_path, xvars=xvars, yvars=yvars,
                                      sample=sample, transform=transforms.Compose([ToTensor()]))

        # load the whole dataset as a single batch (full-batch gradients suit LBFGS)
        should_shuffle = False

        loader = DataLoader(
            dataset=dataset,
            batch_size=len(dataset),
            shuffle=should_shuffle,
        )

        return loader

    @pl.data_loader
    def train_dataloader(self):
        print('training data loader called')
        return self.__dataloader(train=True)

    @pl.data_loader
    def val_dataloader(self):
        print('val data loader called')
        return self.__dataloader(train=False)

    @pl.data_loader
    def test_dataloader(self):
        print('test data loader called')
        return self.__dataloader(train=False)

    @staticmethod
    def add_model_specific_args(parent_parser, root_dir):  # pragma: no cover
        """
        Parameters you define here will be available to your model through self.hparams
        :param parent_parser:
        :param root_dir:
        :return:
        """
        parser = ArgumentParser(parents=[parent_parser])

        # param overwrites
        # parser.set_defaults(gradient_clip_val=5.0)

        # network params
        parser.add_argument('--in_features', default=5, type=int)
        parser.add_argument('--out_features', default=1, type=int)
        # use 500 for CPU, 50000 for GPU to see speed difference
        parser.add_argument('--hidden_dim', default=10, type=int)
        parser.add_argument('--learning_rate', default=0.1, type=float)

        # data
        parser.add_argument('--data_root', default='data_path/', type=str)

        # training params (opt)
        parser.add_argument('--optimizer_name', default='lbfgs', type=str)
        parser.add_argument('--batch_size', default=1, type=int)
        return parser

Thank you in advance.

williamFalcon commented 5 years ago

Your learning rate is 1. I suggest looking into how to select learning rates (Coursera, etc.). Try 0.001.
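i.e. in your configure_optimizers, something like:

    def configure_optimizers(self):
        # same as before, just with the smaller learning rate suggested above
        optimizer = optim.LBFGS(self.parameters(), lr=0.001)
        return [optimizer], []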
