aliutkus / torchinterp1d

1D interpolation for pytorch
BSD 3-Clause "New" or "Revised" License

Error in the gradient computation: RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. #23

Open rtavenar opened 1 month ago

rtavenar commented 1 month ago

Hi,

First of all, thanks for this package, which is very useful.

I have a use case in which I would like to optimize over the positions of the query grid of interp1d (i.e. xnew).

Here is a short example illustrating what I want to do:

import torch
from torch.autograd import grad
from torchinterp1d import interp1d

torch.manual_seed(0)

n = 20
original_grid = torch.linspace(0, torch.pi, n)
x = torch.cos(original_grid)

indices = torch.arange(0, n, step=4)
inputs = x[indices]

grid_to_be_optimized = torch.tensor([0.1, 0.5, 0.7, 0.9, 1.1])
grid_to_be_optimized.requires_grad_()
# At convergence, we hope to get 
# grid_to_be_optimized ~= original_grid[indices]

loss_fn = torch.nn.MSELoss()
n_steps = 100
for _ in range(n_steps):
    outputs = interp1d(original_grid, x, grid_to_be_optimized)
    loss = loss_fn(inputs, outputs)
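    # computing the gradient w.r.t. the query positions fails here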
    grad(loss, [grid_to_be_optimized], allow_unused=True)

My goal is to optimize the positions in grid_to_be_optimized via gradient descent, but the gradient computation fails with:

/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/nn/modules/loss.py:535: UserWarning: Using a target size (torch.Size([1, 5])) that is different to the input size (torch.Size([5])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)
Traceback (most recent call last):
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/tests.py", line 24, in <module>
    grad(loss, [grid_to_be_optimized], allow_unused=True)
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 412, in grad
    result = _engine_run_backward(
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/function.py", line 301, in apply
    return user_fn(self, *args)
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torchinterp1d/interp1d.py", line 155, in backward
    gradients = torch.autograd.grad(
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 412, in grad
    result = _engine_run_backward(
  File "/Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

My package versions are:

% pip show torch
Name: torch
Version: 2.3.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Location: /Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages
Requires: filelock, fsspec, jinja2, networkx, sympy, typing-extensions
Required-by: torchinterp1d

% pip show torchinterp1d
Name: torchinterp1d
Version: 1.1
Summary: An interp1d implementation for pytorch
Home-page: https://github.com/aliutkus/torchinterp1d
Author: Antoine Liutkus
Author-email: antoine.liutkus@inria.fr
License: 
Location: /Users/rtavenar/Documents/recherche/src/gradient_based_dtw/venv/lib/python3.10/site-packages
Requires: torch
Required-by: 

I'd be happy to give a hand, but I have no idea where to start, to be honest...

rtavenar commented 4 weeks ago

Not sure if it helps, but if I turn the torch.autograd.Function into a standard function (and hence remove the custom backward pass, letting autograd differentiate the forward computation directly), everything works smoothly.
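
For illustration, here is roughly what I mean. This is not the package's code, just a minimal sketch of a plain (no custom backward) linear interpolation based on torch.searchsorted; the name interp1d_plain is made up, and it assumes a 1D grid x sorted in ascending order:

import torch

def interp1d_plain(x, y, xnew):
    # Plain autograd version: no torch.autograd.Function, so PyTorch
    # differentiates through the arithmetic below automatically.
    # Assumes x is 1D and sorted in ascending order.
    idx = torch.searchsorted(x.detach(), xnew.detach()).clamp(1, len(x) - 1)
    x_lo, x_hi = x[idx - 1], x[idx]
    y_lo, y_hi = y[idx - 1], y[idx]
    # interpolation weight built only from differentiable ops,
    # so gradients flow to both y and xnew
    w = (xnew - x_lo) / (x_hi - x_lo)
    return y_lo + w * (y_hi - y_lo)

With this kind of function in place of interp1d in the example above, grad(loss, [grid_to_be_optimized]) returns a proper gradient.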