LLNL / LEAP

A comprehensive library of 3D transmission Computed Tomography (CT) algorithms with a Python API, fully integrated with PyTorch
https://leapct.readthedocs.io
MIT License

About the differentiability of BackProjection #46

Closed mit-mit-pg closed 2 weeks ago

mit-mit-pg commented 4 weeks ago

Hi, thank you for the amazing work! I really enjoy using it. I have a question about the differentiability of backprojection.

When I tried to calculate the gradient after backprojection, a RuntimeError occurred: "One of the differentiated Tensors appears to not have been used in the graph."

Below is a simple script to reproduce the issue. If "forward_project" is set to True, there is no problem, but if it is set to False (i.e., using backprojection), an error occurs in my environment. (I am aware that this code, which computes a loss on the projected image itself, is pointless, but I included it because I thought it would make the problem easy to understand.)

At first I thought that the computational graph was not generated correctly through the backprojection function. Or am I using the library incorrectly?

I would really appreciate it if you could let me know. Thanks in advance!

import numpy as np
import torch
from leaptorch import Projector

device = torch.device("cuda:0")

# define projector
forward_project = True  # or False
proj = Projector(
    forward_project=forward_project, use_static=True, 
    use_gpu=True, gpu_device=device, batch_size=1)

# set geometry
numCols = 256
numAngles = 2*int(360*numCols/1024)
pixelSize = 0.5*512/numCols
numRows = 1
proj.leapct.set_parallelbeam(
    numAngles, numRows, numCols, pixelSize, pixelSize, 
    0.5*(numRows-1), 0.5*(numCols-1),
    proj.leapct.setAngleArray(numAngles, 180.0))
proj.leapct.set_default_volume()
proj.allocate_batch_data()

# generate dummy input
rng = np.random.default_rng()
if forward_project:
  x = rng.random(size=(1,256,256)) # size of the image
else:
  x = rng.random(size=(180,1,256)) # size of the projection 
x = torch.from_numpy(x).to(device).unsqueeze(0)
x.requires_grad = True

# get projection
y = proj(x)

# calculate gradient
grad = torch.autograd.grad(y.mean(), x, retain_graph=True, create_graph=True)[0]
print(grad, grad.sum())
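For context on what autograd needs here: both forward and back projection are linear operators, and PyTorch can only differentiate through them if a backward rule applies the adjoint (the transpose operator). The sketch below illustrates the pattern with a toy dense matrix standing in for the projector; `LinearOp` is a hypothetical illustration, not LEAP's actual implementation.

```python
import torch

class LinearOp(torch.autograd.Function):
    """Toy linear operator y = A @ x, a stand-in for a CT projector.

    For a linear map A, the vector-Jacobian product in backward()
    is simply the adjoint A^T applied to the incoming gradient.
    """
    @staticmethod
    def forward(ctx, x, A):
        ctx.save_for_backward(A)
        return A @ x

    @staticmethod
    def backward(ctx, grad_out):
        (A,) = ctx.saved_tensors
        # gradient w.r.t. x is A^T @ grad_out; no gradient for A here
        return A.T @ grad_out, None

torch.manual_seed(0)
A = torch.randn(5, 3)
x = torch.randn(3, requires_grad=True)

y = LinearOp.apply(x, A)
grad = torch.autograd.grad(y.mean(), x)[0]

# analytic gradient of mean(A @ x) w.r.t. x is A^T @ (ones / 5)
expected = A.T @ (torch.ones(5) / 5)
print(torch.allclose(grad, expected))  # True
```

If the backward rule is missing (or the custom Function never records the input in the graph), autograd raises exactly the "differentiated Tensors appears to not have been used in the graph" error seen above.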
kylechampley commented 4 weeks ago

It looks like your variable x is float64. Make sure you do this:

if forward_project:
    x = rng.random(size=(1,256,256), dtype=np.float32)  # size of the image
else:
    x = rng.random(size=(180,1,256), dtype=np.float32)  # size of the projection

Regardless, I get an error as well. @hkimdavis could you look into this?
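As a side note on that dtype pitfall: NumPy's `Generator.random` returns float64 unless a dtype is passed, and `torch.from_numpy` preserves the source dtype, so the tensor silently arrives as float64. A quick check (the cast via `.float()` shown here is one possible fix, assuming the projector expects float32):

```python
import numpy as np
import torch

rng = np.random.default_rng(0)
x64 = rng.random(size=(2, 2))                    # float64 by default
x32 = rng.random(size=(2, 2), dtype=np.float32)  # explicit float32

print(torch.from_numpy(x64).dtype)          # torch.float64
print(torch.from_numpy(x32).dtype)          # torch.float32
print(torch.from_numpy(x64).float().dtype)  # torch.float32 after casting
```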

hkimdavis commented 3 weeks ago

@mit-mit-pg could you pull the main branch again to see if it works?

mit-mit-pg commented 3 weeks ago

@kylechampley Thanks for checking my comment! That was my mistake; the variable should be float32.

@hkimdavis Due to personal reasons, it will be a little while before I can confirm, but I greatly appreciate the quick update. I will comment as soon as I get results.

mit-mit-pg commented 2 weeks ago

I'm really sorry for the late reply. Thanks to your update, the problem seems to be solved!! In my environment, the backpropagation went fine.

I think this issue can be closed now!

kylechampley commented 2 weeks ago

Glad it worked!