odlgroup / odl

Operator Discretization Library https://odlgroup.github.io/odl/
Mozilla Public License 2.0

pytorch autograd deprecated #1616

Open lofux opened 2 years ago

lofux commented 2 years ago

Hi!

I am trying to run the Jupyter notebook part3_learned_reconstruction_pytorch.ipynb from the odlworkshop repository. I use PyTorch 1.7.0 and CUDA 10.1.

I get the following error message:


```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      5
      6 test_images = Variable(images)
----> 7 test_data = generate_data(test_images)

<ipython-input> in generate_data(images)
     19     """
     20     torch.manual_seed(123)
---> 21     data = fwd_op_mod(images)
     22     data += Variable(torch.randn(data.shape)).type_as(data)
     23     return data

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/.local/lib/python3.6/site-packages/odl/contrib/torch/operator.py in forward(self, x)
    393         results = []
    394         for i in range(x_flat_xtra.data.shape[0]):
--> 395             results.append(self.op_func(x_flat_xtra[i]))
    396
    397         # Reshape the resulting stack to the expected output shape

~/.local/lib/python3.6/site-packages/torch/autograd/function.py in __call__(self, *args, **kwargs)
    158     def __call__(self, *args, **kwargs):
    159         raise RuntimeError(
--> 160             "Legacy autograd function with non-static forward method is deprecated. "
    161             "Please use new-style autograd function with static forward method. "
    162             "(Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)")

RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
```

I wonder if it is possible to update ODL for the new-style autograd in PyTorch?

Kind regards,
Louise
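
(For reference, the "new-style" function that the error message asks for looks like the sketch below. This is a minimal toy example of the `torch.autograd.Function` API, not ODL's actual operator wrapper:)

```python
import torch

class Square(torch.autograd.Function):
    # New-style autograd function: forward and backward are @staticmethods,
    # and the function is invoked via .apply() instead of being instantiated.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash inputs needed in the backward pass
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # chain rule: d(x^2)/dx = 2x

x = torch.ones(3, requires_grad=True)
Square.apply(x).sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```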
ZeliangM commented 2 years ago

I am running into the same problem!

swpeng24 commented 1 year ago

Just downgrade your PyTorch to a version below 1.3 for now. I am also working on the failure that occurs above 1.3, and I am still investigating...
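
(For anyone taking the downgrade route: in a pip-based environment this would be something like `pip install "torch<1.3"`, picking the build that matches your CUDA version.)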

JevgenijaAksjonova commented 1 year ago

Hi,

I am using an almost-latest PyTorch (1.12.1.post201) and have no such problem with the bindings. I imagine that part3_learned_reconstruction_pytorch.ipynb may contain some outdated code; however, the following code runs as expected:

```python
import matplotlib.pyplot as plt
import numpy as np
import odl
import torch
from odl.contrib.torch import OperatorModule

print(torch.__version__)

# Reconstruction space and phantom
X = odl.uniform_discr([-10, -10], [10, 10], (100, 100))
x = odl.phantom.shepp_logan(X)

# Fan-beam geometry and ray transform, wrapped as a torch module
apart = odl.uniform_partition(0, 2 * np.pi, 100)
dpart = odl.uniform_partition(-30, 30, 100)
geometry = odl.tomo.FanBeamGeometry(apart=apart, dpart=dpart,
                                    src_radius=15, det_radius=15)
operator = odl.tomo.RayTransform(X, geometry)
pt_op = OperatorModule(operator)
pt_x = torch.from_numpy(x.asarray().reshape(1, 1, *x.shape)).cuda()

plt.imshow(pt_op(pt_x).detach().cpu().numpy().squeeze())
```
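
(As a quick check that gradients also flow through the wrapped operator — assuming the same `pt_op` and `pt_x` as in the snippet above — something like this should work:)

```python
# Backpropagate through the ODL ray transform; the torch wrapper uses the
# operator's derivative/adjoint for the backward pass.
pt_x.requires_grad_(True)
loss = pt_op(pt_x).sum()
loss.backward()
print(pt_x.grad.shape)  # gradient w.r.t. the input, same shape as pt_x
```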

ZeliangM commented 1 year ago


I still get the error when I run your example (tried pytorch==1.10.0 and 1.8.0).
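
(A possible explanation, though this is an assumption based on the release history rather than anything confirmed in this thread: the reworked `odl.contrib.torch` bindings may only be available from ODL's git master, not from the old PyPI release, so it is worth checking which ODL you actually have installed:)

```python
import odl
# If this prints an old release version, the legacy torch wrapper may still
# be in use; installing ODL from the GitHub master branch may help.
print(odl.__version__)
```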