eigenvivek / DiffDRR

Auto-differentiable digitally reconstructed radiographs in PyTorch
https://vivekg.dev/DiffDRR
MIT License

Working with Nifti Files #225

Closed Alookima21 closed 6 months ago

Alookima21 commented 6 months ago

I am a beginner when it comes to this; I am an undergraduate student working on a project to generate 3D CT volumes from limited X-rays. Before training the model, I am using the LiTS CT scan dataset to generate X-rays with your method. Without rearranging the voxel_spacing array I was getting a blank output. I have attached the code and the resultant image below. I believe the issue might be with sdr and height, but even if these are the problem, I am not sure how to find accurate values for them.

import matplotlib.pyplot as plt
import torch
print('Imported torch')
from diffdrr.drr import DRR
print('Imported DRR')
from diffdrr.data import load_example_ct
print('Imported load_example_ct')
from diffdrr.visualization import plot_drr
print('Imported plot_drr')

import nibabel as nib

ct_volume_path = '../Downloads/CT_Scans/volume-51.nii'
ct_volume_nifti = nib.load(ct_volume_path)
ct_volume_data = ct_volume_nifti.get_fdata()

# Convert to a tensor with dtype=float32
ct_volume_tensor = torch.tensor(ct_volume_data, dtype=torch.float32)

# Get voxel spacing from the NIfTI file header
voxel_spacing = torch.tensor(ct_volume_nifti.header.get_zooms())
# Manually rearrange the spacing entries (without this, the output was blank)
voxel_spacing[1] = voxel_spacing[2]
voxel_spacing[2] = voxel_spacing[0]

print(voxel_spacing)

# Initialize the DRR module for generating synthetic X-rays
device = torch.device("cpu")
print(f"Using device: {device}")

drr = DRR(
    ct_volume_tensor,   # CT volume loaded from the NIfTI file
    voxel_spacing,      # Voxel spacing (in mm)
    sdr=1200,                # Source-to-detector distance (redundant but required)
    height=1024,         # Image height (if width is not provided, the generated DRR is square)
    delx=0.1,            # Pixel spacing (in mm)
    renderer="trilinear"
).to(device)

# Set the camera pose with rotations (yaw, pitch, roll) and translations (x, y, z)
rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)

# 📸 Also note that DiffDRR can take many representations of SO(3) 📸
# For example, quaternions, rotation matrix, axis-angle, etc...
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY", n_points=250)
plot_drr(img, ticks=False)
plt.show()

This is the image I am getting:

[Screenshot 2024-04-18 at 5.09.44 PM]
eigenvivek commented 6 months ago

Hi @Alookima21 , which version of DiffDRR are you using?

The development version on the main branch implements a different geometry than the current release on PyPI (v0.3.12).

I would recommend installing the development version (see #223).

Alookima21 commented 6 months ago

I have installed the development version, and the tutorial notebooks are working. Now I am trying to run it on one of the CT scans from my dataset. I am loading the data and formatting it properly, but it throws an error at img = drr(...):

import torchio

ct_volume_path = '../Downloads/CT_Scans/volume-51.nii'
ct_volume_nifti = nib.load(ct_volume_path)
ct_volume_data = ct_volume_nifti.get_fdata()
ct_volume_data = ct_volume_data[None, ...]  # Add a singleton channel dimension at the front

# Create a TorchIO ScalarImage object
subject_dict = {
    'volume': torchio.ScalarImage(tensor=ct_volume_data),  #shape verified to be (1, 512, 512, 227)
    'mask': None,
    'density': None,
    'structures': None
}
subject = torchio.Subject(subject_dict)

# Initialize the DRR module for generating synthetic X-rays
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
drr = DRR(
    subject,  # A torchio.Subject object storing the CT volume, origin, and voxel spacing
    sdd=1020,  # Source-to-detector distance (i.e., the C-arm's focal length)
    height=200,  # Height of the DRR (if width is not separately provided, the generated image is square)
    delx=2.0,  # Pixel spacing (in mm)
).to(device)

# Set the camera pose with rotations (yaw, pitch, roll) and translations (x, y, z)
rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")   #throws error here
plot_drr(img, ticks=False)
plt.show()

The error it's throwing is:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[61], line 13
     11 rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
     12 translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)
---> 13 img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")
     14 plot_drr(img, ticks=False)
     15 plt.show()

File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/repos/DiffDRR/diffdrr/drr.py:129, in forward(self, parameterization, convention, mask_to_channels, *args, **kwargs)
...
File ~/repos/DiffDRR/diffdrr/renderers.py:18, in Siddon.dims(self, volume)
     17 def dims(self, volume):
---> 18     return torch.tensor(volume.shape).to(volume) + 1

AttributeError: 'NoneType' object has no attribute 'shape'

I don't understand this, as the volume shape comes out fine in the code above when loading and formatting the data. I still can't pick out the issue.

eigenvivek commented 6 months ago

Hi @Alookima21 , diffdrr.drr.DRR needs subject to have a density attribute (a conversion of the CT volume from HU to LAC units).

To read your NIfTI file, try subject = diffdrr.data.read('../Downloads/CT_Scans/volume-51.nii').
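
For reference, here's a minimal end-to-end sketch of that loading path, reusing the geometry values from your second snippet (they're just placeholders here):

import torch
from diffdrr.data import read
from diffdrr.drr import DRR
from diffdrr.visualization import plot_drr

# read() loads the NIfTI file and builds the density attribute internally
subject = read('../Downloads/CT_Scans/volume-51.nii')

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
drr = DRR(
    subject,     # A torchio.Subject storing the CT volume, origin, and voxel spacing
    sdd=1020.0,  # Source-to-detector distance (the C-arm's focal length)
    height=200,  # Height of the square DRR
    delx=2.0,    # Pixel spacing (in mm)
).to(device)

rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")
plot_drr(img, ticks=False)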

Does that work?

Alookima21 commented 6 months ago

That worked perfectly. However, the results I am getting are not ideal. What are some steps I could take to improve these results?

[xray]

Thank you so much for diligently answering my queries.

eigenvivek commented 6 months ago

no problem - and that geometry looks correct! what's not ideal about the results? the appearance of the DRR?

Alookima21 commented 6 months ago

I may be incorrect, but these differ from the sample results in the notebook in terms of the clarity of the bone structures and of organs such as the lungs. If I am not wrong, this may be due to differing patients, CT scan procedures, or a differing number of slices per CT. Still, I am wondering if there could be any possible improvements in terms of appearance.

eigenvivek commented 6 months ago

Not much more to do now - all the renderings in the intro notebook were made with the same renderer that you're using now, so it's probably due to differences in the CT (https://vivekg.dev/DiffDRR/tutorials/introduction.html)

You could try segmenting your CT with TotalSegmentator (their package is very easy to use), and then passing the segmentation mask along with the volume to the renderer. Then you could render different structures individually, like the segmentations in the tutorial; a rough sketch is below.
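
Something like this (the labelmap argument to read and the segmentation path are assumptions; mask_to_channels is the flag that appears in the DRR.forward signature in your traceback above):

import torch
from diffdrr.data import read
from diffdrr.drr import DRR

# Hypothetical path: segmentation-51.nii stands in for a TotalSegmentator labelmap
subject = read(
    '../Downloads/CT_Scans/volume-51.nii',
    '../Downloads/CT_Scans/segmentation-51.nii',  # assumed: read() accepts a labelmap alongside the volume
)
drr = DRR(subject, sdd=1020.0, height=200, delx=2.0)

rotations = torch.tensor([[0.0, 0.0, 0.0]])
translations = torch.tensor([[0.0, 850.0, 0.0]])
img = drr(
    rotations,
    translations,
    parameterization="euler_angles",
    convention="ZXY",
    mask_to_channels=True,  # one output channel per labeled structure
)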

I'm working on adding a few more physics-based augmentations to the package (e.g., different energy spectra dependent on the material of a voxel, similar to what DeepDRR does), but that won't be added for another week or so

Hope that helps! Changing the appearance of the DRR may not be necessary depending on what your downstream task is.

eigenvivek commented 6 months ago

Closing for now, feel free to reopen if you have more questions!

eigenvivek commented 6 months ago

Hi @Alookima21 , while I haven't gotten to fully physics-based image augmentations, I implemented a quick-and-dirty solution: https://vivekg.dev/DiffDRR/tutorials/introduction.html#changing-the-appearance-of-the-rendered-drrs

You can change the bone_attenuation_multiplier in the diffdrr.data.read function to make bone brighter or darker relative to soft tissue. Curious to see what it'd do to your DRRs!
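
For example (the value 3.0 is just an illustrative choice, not a recommended setting):

from diffdrr.data import read

# Larger multipliers make bone brighter relative to soft tissue
subject = read('../Downloads/CT_Scans/volume-51.nii', bone_attenuation_multiplier=3.0)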

fedeface98 commented 4 months ago

Hi Vivek!

Is there a way to somehow optimize the bone attenuation multiplier along with the other parameters, i.e., to change it in the DRR function rather than in the read function?

Thanks a lot!

eigenvivek commented 4 months ago

hi federica, this should be possible but it requires a little bit of hacking

basically, you need to set up your own torch.nn.Module for which bone_attenuation_multiplier is a torch.nn.Parameter

then, you need to call the diffdrr.data.transform_hu_to_density function on the original drr.volume tensor with your bone_attenuation_multiplier value

for each iteration, you'd call this transform, render DRRs, compare them to some ground truth image to compute the loss, then backpropagate to the bone_attenuation_multiplier parameter

one difficulty i envision is that i'm pretty sure diffdrr.data.transform_hu_to_density takes a scalar input for bone_attenuation_multiplier and not a tensor, so you may need to rewrite that function to accept a tensor input
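
here's a rough sketch of that setup, assuming transform_hu_to_density can be rewritten to accept a tensor multiplier and that the DRR module exposes its density volume as an attribute (the attribute name and function signature below are assumptions):

import torch
from diffdrr.data import transform_hu_to_density  # signature assumed: (volume_hu, bone_attenuation_multiplier)

class BoneAttenuationOptimizer(torch.nn.Module):
    """Wraps a DRR module and treats bone_attenuation_multiplier as a learnable parameter."""

    def __init__(self, drr, volume_hu, init=1.0):
        super().__init__()
        self.drr = drr
        self.volume_hu = volume_hu  # original CT volume in HU (e.g., a copy of drr.volume)
        self.bone_attenuation_multiplier = torch.nn.Parameter(torch.tensor(init))

    def forward(self, rotations, translations):
        # Recompute the density volume with the current multiplier, then render.
        # transform_hu_to_density must accept a tensor multiplier for gradients to flow.
        density = transform_hu_to_density(self.volume_hu, self.bone_attenuation_multiplier)
        self.drr.density = density  # attribute name assumed; adapt to how DRR stores the density volume
        return self.drr(rotations, translations, parameterization="euler_angles", convention="ZXY")

# Usage sketch: compare rendered DRRs to a ground truth X-ray and backpropagate
# model = BoneAttenuationOptimizer(drr, volume_hu)
# optimizer = torch.optim.Adam([model.bone_attenuation_multiplier], lr=1e-2)
# for _ in range(100):
#     optimizer.zero_grad()
#     pred = model(rotations, translations)
#     loss = torch.nn.functional.mse_loss(pred, true_xray)
#     loss.backward()
#     optimizer.step()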