DiffEqML / torchdyn

A PyTorch library entirely dedicated to neural differential equations, implicit models and related numerical methods
https://torchdyn.org
Apache License 2.0

Implicit Euler method is not compatible with neural ODE training #156

cyx96 commented 2 years ago

First of all, thank you for this amazing library!

For my own research, I want to test implicit integrators with the adjoint sensitivity method to train a neural network. While the implicit Euler method provided by the library can be used to integrate ODEs (no gradients required), it is incompatible with the adjoint sensitivity method, which requires gradient information. The code snippet to reproduce the error is provided below.

import torch
import torch.nn as nn
import torch.utils.data as data

import pytorch_lightning as pl

from torchdyn.core import NeuralODE
from torchdyn.datasets import *
from torchdyn import *

device = torch.device("cpu") # all of this works on GPU as well :)

# data and integration span as in the torchdyn quickstart (assumed setup,
# since the original snippet did not define X, yn, or t_span)
d = ToyDataset()
X, yn = d.generate(n_samples=512, noise=1e-1, dataset_type='moons')
t_span = torch.linspace(0, 1, 5)

X_train = torch.Tensor(X).to(device)
y_train = torch.LongTensor(yn.long()).to(device)
train = data.TensorDataset(X_train, y_train)
trainloader = data.DataLoader(train, batch_size=len(X), shuffle=True)

class Learner(pl.LightningModule):
    def __init__(self, t_span:torch.Tensor, model:nn.Module):
        super().__init__()
        self.model, self.t_span = model, t_span

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        t_eval, y_hat = self.model(x, self.t_span)
        y_hat = y_hat[-1] # select last point of solution trajectory
        loss = nn.CrossEntropyLoss()(y_hat, y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=0.01)

    def train_dataloader(self):
        return trainloader

f = nn.Sequential(
        nn.Linear(2, 16),
        nn.Tanh(),
        nn.Linear(16, 2)
    )

model = NeuralODE(f, sensitivity='adjoint', solver='ieuler').to(device)

learn = Learner(t_span, model)
trainer = pl.Trainer(min_epochs=200, max_epochs=300)
trainer.fit(learn)

Executing the above code snippet produces the following RuntimeError:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

Changing the sensitivity from adjoint to autograd prevents the RuntimeError, but the loss does not decrease during training. I also tried modifying the implicit Euler method by changing retain_graph from False to True, but that didn't solve the issue either. I suspect this has something to do with the LBFGS optimizer the library uses to find roots for the implicit integrator, but I don't really know how to fix it.
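
For reference, the two configurations mentioned above differ only in the `sensitivity` argument (same `f` and `device` as in the snippet):

```python
# raises the RuntimeError above
model = NeuralODE(f, sensitivity='adjoint', solver='ieuler').to(device)

# trains without errors, but the loss stays flat
model = NeuralODE(f, sensitivity='autograd', solver='ieuler').to(device)
```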

Any help on this issue is much appreciated! Thanks in advance!

Zymrael commented 2 years ago

Providing an update here based on our private chat, in case someone else is interested: this should be done using the IFT at the fixed point, implementing a custom backward for the implicit step.

@cyx96 graciously offered to help implement this
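
Not torchdyn's actual implementation, but a rough standalone sketch of the idea, in case it helps: a single implicit Euler step wrapped in a `torch.autograd.Function` whose backward applies the IFT at the converged root instead of backpropagating through the root finder. The class name, iteration counts, and the naive fixed-point loops (standing in for the L-BFGS root finder mentioned above) are all placeholders.

```python
import torch
import torch.nn as nn

class ImplicitEulerStepIFT(torch.autograd.Function):
    """One implicit Euler step z1 = z0 + h * f(z1), with a custom backward that
    applies the implicit function theorem at the converged root instead of
    differentiating through the root-finding iterations."""

    @staticmethod
    def forward(ctx, z0, h, f, *params):
        with torch.no_grad():
            # naive fixed-point iteration, standing in for a proper root finder
            z1 = z0.clone()
            for _ in range(100):
                z1 = z0 + h * f(z1)
        ctx.save_for_backward(z0, z1, *params)
        ctx.h, ctx.f = h, f
        return z1

    @staticmethod
    def backward(ctx, grad_out):
        z0, z1, *params = ctx.saved_tensors
        h, f = ctx.h, ctx.f
        z1 = z1.detach().requires_grad_(True)
        with torch.enable_grad():
            f1 = f(z1)
        # IFT: solve (I - h * J^T) u = grad_out, with J = df/dz at the root.
        # A simple fixed-point iteration u <- grad_out + h * J^T u is used here;
        # it converges when the spectral radius of h * J is below 1.
        u = grad_out
        for _ in range(100):
            jTu = torch.autograd.grad(f1, z1, u, retain_graph=True)[0]
            u = grad_out + h * jTu
        # gradient w.r.t. z0 is u; gradient w.r.t. the parameters is h * (df/dtheta)^T u
        param_grads = torch.autograd.grad(f1, params, u, retain_graph=True,
                                          allow_unused=True)
        param_grads = tuple(h * g if g is not None else None for g in param_grads)
        return (u, None, None, *param_grads)

# usage: gradients reach f's parameters without retaining the solver's graph
f = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))
z0 = torch.randn(8, 2)
z1 = ImplicitEulerStepIFT.apply(z0, 0.1, f, *f.parameters())
z1.sum().backward()
```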

joglekara commented 2 years ago

For future readers who spend a few minutes trying to figure out what IFT is...

IFT = Implicit Function Theorem
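
Concretely (a quick sketch of how it applies here): the implicit Euler step defines its output as the root $z^*$ of

$$g(z^*, \theta) = z^* - z_0 - h\, f(z^*, \theta) = 0,$$

and the implicit function theorem gives the sensitivity of that root without differentiating through the solver's iterations:

$$\frac{\partial z^*}{\partial \theta} = \Big(I - h\, \frac{\partial f}{\partial z^*}\Big)^{-1} h\, \frac{\partial f}{\partial \theta}.$$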