odlgroup / odl

Operator Discretization Library https://odlgroup.github.io/odl/
Mozilla Public License 2.0

a difference between odl_torch and normal odl #1645

Open wuniii opened 5 months ago

wuniii commented 5 months ago

```python
from odl.contrib import torch as odl_torch  # needed for OperatorModule

para_ini = initialization()
fp, fbp, op_norm = build_gemotry(para_ini)  # forward projector, FBP, operator norm

# Wrap the ODL operators as torch.nn.Modules so they can act on tensors
op_modfp = odl_torch.OperatorModule(fp)
op_modfbp = odl_torch.OperatorModule(fbp)
op_modpT = odl_torch.OperatorModule(fp.adjoint)
```

The code above sets up my projection and filtered backprojection for both ndarray and tensor inputs. The result is fine when I apply the FBP to the projection on the CPU in ndarray format, but it is much worse when I process the projection on the GPU in tensor format using op_modpT. Is this caused by a problem in my code?
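For readers hitting the same thing, here is a minimal self-contained sketch of the two paths being compared. The geometry is a hypothetical stand-in for whatever build_gemotry creates (it is not shown in the issue), and it needs a tomography backend such as astra installed. Note also that fp.adjoint is the plain, unfiltered backprojection, so its output is expected to look much worse than the FBP result even when everything works correctly:

```python
import numpy as np
import torch
import odl
from odl.contrib import torch as odl_torch

# Hypothetical stand-ins for the operators from build_gemotry:
# a 2D parallel-beam ray transform and its FBP (requires e.g. astra).
space = odl.uniform_discr([-20, -20], [20, 20], [256, 256], dtype='float32')
geometry = odl.tomo.parallel_beam_geometry(space, num_angles=180)
fp = odl.tomo.RayTransform(space, geometry)
fbp = odl.tomo.fbp_op(fp)

phantom = odl.phantom.shepp_logan(space, modified=True)
proj = fp(phantom)

# CPU / ndarray path: filtered backprojection.
reco_cpu = fbp(proj)

# GPU / tensor path, as in the snippet above. fp.adjoint is the
# *unfiltered* backprojection, not the FBP, so a blurry result here
# does not by itself indicate a bug in the wrapper.
op_modpT = odl_torch.OperatorModule(fp.adjoint)
proj_t = torch.as_tensor(np.asarray(proj)).unsqueeze(0)  # add batch axis
if torch.cuda.is_available():
    proj_t = proj_t.cuda()
reco_gpu = op_modpT(proj_t)
```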

leftaroundabout commented 3 months ago

Hello, sorry I missed this issue when it came in; I was on vacation.

Note that Torch is currently not really supported at all; the existing code is only an old attempt, something of a hack, which relies on copying data back and forth between CPU and GPU.
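To make the copying concrete, the wrapper's forward evaluation behaves roughly like the following simplified sketch (not the actual implementation; autograd handling is omitted):

```python
import numpy as np
import torch

def apply_odl_operator(op, x: torch.Tensor) -> torch.Tensor:
    """Roughly what the old bridge does for a single (unbatched) input.

    Every call incurs a device-to-host copy, a NumPy-based evaluation
    of the ODL operator, and a host-to-device copy of the result.
    """
    x_np = x.detach().cpu().numpy()             # GPU -> CPU
    y_np = np.asarray(op(x_np))                 # evaluate in ODL (NumPy)
    return torch.from_numpy(y_np).to(x.device)  # CPU -> GPU
```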

But we are actively working on proper Torch integration again. It won't make the upcoming release, but it should be ready sometime in autumn. The idea is that it will then be possible to run computations involving high-level ODL operators directly, but with the implementation in PyTorch (entirely on the GPU if desired) and with the ability to auto-differentiate as if you were working directly with Torch tensors.
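For context, the existing hack does already expose gradients through autograd (the backward pass goes through the adjoint of the operator's derivative), just with the CPU round trips described above; what the planned integration changes is that everything runs natively in PyTorch. A small sketch, reusing fp from the earlier snippet:

```python
import torch
from odl.contrib import torch as odl_torch

# Reuses fp (the ray transform) from the sketch further up.
op_mod = odl_torch.OperatorModule(fp)

x = torch.zeros(1, *fp.domain.shape, requires_grad=True)
target = torch.ones(1, *fp.range.shape)

loss = ((op_mod(x) - target) ** 2).sum()
loss.backward()  # backward pass uses fp.derivative(x).adjoint, with
                 # a CPU round trip on every forward and backward call
print(x.grad.shape)  # same shape as x
```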

If you cannot wait for this, you can check out the experimental version at https://github.com/leftaroundabout/odl/tree/backend/pytorch-arrays. Many operators and solvers already work there, but I have not yet attempted gradient descent and similar methods.

wuniii commented 3 months ago

I have received your email and will read it as soon as possible.