pytorch / ignite

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
https://pytorch-ignite.ai
BSD 3-Clause "New" or "Revised" License

How can I use the LBFGS optimizer with ignite? #610

Closed riverarodrigoa closed 5 years ago

riverarodrigoa commented 5 years ago

Hi all,

I started using Ignite recently and I found it very interesting. I would like to train a model using the LBFGS algorithm from the torch.optim module as the optimizer.

This is my code:

import torch

from ignite.engine import Events, Engine, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import RootMeanSquaredError, Loss
from ignite.handlers import EarlyStopping

# simpleNN and get_data_loaders are defined elsewhere in my code
D_in, H, D_out = 5, 10, 1
model = simpleNN(D_in, H, D_out)
model.double()
train_loader, val_loader = get_data_loaders(i)

optimizer = torch.optim.LBFGS(model.parameters(), lr=1)
loss_func = torch.nn.MSELoss()

# Ignite
trainer = create_supervised_trainer(model, optimizer, loss_func)
evaluator = create_supervised_evaluator(model, metrics={'RMSE': RootMeanSquaredError(), 'LOSS': Loss(loss_func)})

@trainer.on(Events.ITERATION_COMPLETED)
def log_training_loss(engine):
    print("Epoch[{}] Loss: {:.5f}".format(engine.state.epoch, engine.state.output))

def score_function(engine):
    val_loss = engine.state.metrics['RMSE']
    print("VAL_LOSS: {:.5f}".format(val_loss))
    return -val_loss

handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
evaluator.add_event_handler(Events.COMPLETED, handler)

trainer.run(train_loader, max_epochs=100)

And the error that is raised is: TypeError: step() missing 1 required positional argument: 'closure'

I know that LBFGS requires a closure to be passed to step() (the default update function built by create_supervised_trainer calls optimizer.step() with no arguments, hence the error), so my question is: how can I do this with Ignite? Or is there another approach?
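For reference, outside of Ignite the closure pattern that torch.optim.LBFGS.step expects looks roughly like this (a minimal standalone sketch with a placeholder model and random tensors, not my actual setup):

import torch

model = torch.nn.Linear(5, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters(), lr=1)

x = torch.randn(32, 5)   # placeholder batch
y = torch.randn(32, 1)

def closure():
    # LBFGS may evaluate the closure several times per step,
    # so it must recompute the forward pass and gradients on every call
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    return loss

optimizer.step(closure)  # step() calls closure() internally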

Thank you

vfdev-5 commented 5 years ago

Hi @riverarodrigoa ,

I do not know exactly how LBFGS works with closures, but with Ignite it could probably be used like this:


import torch
from ignite.engine import Engine

model = ...
optimizer = torch.optim.LBFGS(model.parameters(), lr=1)
criterion = torch.nn.MSELoss()  # e.g. MSELoss, as in your snippet

def update_fn(engine, batch):
    model.train()
    x, y = batch
    # pass to device if needed as here: https://github.com/pytorch/ignite/blob/40d815930d7801b21acfecfa21cd2641a5a50249/ignite/engine/__init__.py#L45

    def closure():
        y_pred = model(x)
        loss = criterion(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        return loss

    loss = optimizer.step(closure)  # LBFGS calls closure() internally and returns the loss
    return loss.item()  # becomes engine.state.output

trainer = Engine(update_fn)

# everything else is the same
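For completeness, here is a rough sketch of how the evaluator and early stopping from your snippet could be attached to this custom trainer (assuming model, criterion, train_loader and val_loader are defined as in your code; the run_validation handler is my addition, so that the evaluator actually runs once per epoch and EarlyStopping can fire):

from ignite.engine import Events, create_supervised_evaluator
from ignite.metrics import RootMeanSquaredError, Loss
from ignite.handlers import EarlyStopping

evaluator = create_supervised_evaluator(model, metrics={'RMSE': RootMeanSquaredError(), 'LOSS': Loss(criterion)})

@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(engine):
    # run the evaluator on the validation set at the end of every epoch
    evaluator.run(val_loader)
    print("Epoch[{}] RMSE: {:.5f}".format(engine.state.epoch, evaluator.state.metrics['RMSE']))

def score_function(engine):
    # EarlyStopping expects a score that increases as the model improves
    return -engine.state.metrics['RMSE']

handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
evaluator.add_event_handler(Events.COMPLETED, handler)

trainer.run(train_loader, max_epochs=100)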

Let me know if this solves the problem.

PS: Please read the Concepts section of the documentation for more details.

vfdev-5 commented 5 years ago

Closing the issue as answered. Feel free to reopen if needed.