Open rtavenar opened 5 months ago
Not sure if it helps, but if I turn the torch.autograd.Function into a standard function (and hence remove the code for the backward pass), everything works smoothly.
@rtavenar: I have a similar problem. Can you explain what you actually mean by turning torch.autograd.Function into a standard function? What exactly did you change, and in which code?
@jduerholt You can create a function called interp1d and copy the contents of the forward pass into it. PyTorch autograd then handles the backward pass automatically. This fixes the error.
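In case it helps others, here is a minimal sketch of what that rewrite could look like: a plain 1-D linear interpolation written only with differentiable torch ops, so autograd derives the backward pass on its own. This is an illustrative re-implementation, not the package's actual forward code; the name interp1d is only kept for consistency with the discussion above.

```python
import torch

def interp1d(x, y, xnew):
    """Piecewise-linear interpolation of (x, y) evaluated at xnew.

    Written with differentiable torch ops only, so no custom backward
    is needed: autograd propagates gradients to x, y and xnew.
    Assumes x is 1-D and sorted in increasing order.
    """
    # The bin indices are piecewise constant, so detaching them does not change gradients.
    idx = torch.searchsorted(x.detach(), xnew.detach()).clamp(1, x.numel() - 1)
    x0, x1 = x[idx - 1], x[idx]
    y0, y1 = y[idx - 1], y[idx]
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (xnew - x0)
```

Gradients then flow to x, y, and xnew through the arithmetic ops, which is exactly what the custom backward was standing in the way of.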
Thanks for the hint, I will give it a try. Maybe it is even faster than my current solution ;)
Hi,
First of all, thanks for this package that is very useful.
I have a use case in which I would like to optimize over positions on the output grid of interp1d (i.e. xnew). Here is a short example to illustrate what I want to do:
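Something along these lines (a minimal sketch; the exact signal and objective do not matter, and the toy loss is just a placeholder):

```python
import math
import torch
from torchinterp1d import interp1d  # assuming the interp1d callable from this package

# A known signal sampled on a fixed grid
x = torch.linspace(0., 1., 100)
y = torch.sin(2 * math.pi * x)

# Positions on the output grid that I would like to learn
grid_to_be_optimized = torch.linspace(0.1, 0.9, 10, requires_grad=True)

optimizer = torch.optim.SGD([grid_to_be_optimized], lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    ynew = interp1d(x, y, grid_to_be_optimized)
    loss = ynew.pow(2).sum()  # toy objective on the interpolated values
    loss.backward()           # this is where the gradient computation fails
    optimizer.step()
```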
My goal would be to optimize the positions in grid_to_be_optimized via gradient descent, but computation of the gradient fails with:

My package versions are:
I'd be happy to give a hand, but I have no idea where to start, to be honest...