eigenvivek / DiffDRR

Auto-differentiable digitally reconstructed radiographs in PyTorch
https://vivekg.dev/DiffDRR
MIT License

Question: capturing the femur from knee joint height at an angle #298

Closed: YassinAbdelrahman closed this issue 3 months ago

YassinAbdelrahman commented 3 months ago

I am having trouble creating a DRR that captures the left femur in the following way: [reference image of the desired view]

That is, I want to place the point source at knee joint height and tilt (or aim) it so that the whole femur is projected, with the detector parallel to the femur.

Is this possible with DiffDRR? If so, do I use the standard translation and rotation parameters and DRR function as shown in the introduction? My code is as follows:

import os

import torch
import torchio as tio

from diffdrr.data import read
from diffdrr.drr import DRR
from diffdrr.visualization import plot_drr

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

rotations = torch.tensor([[0.0, -0.5, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, -250.0]], device=device)

# `input_folder` and `image` (the CT filename) are defined earlier in my script
img = tio.ScalarImage(os.path.join(input_folder, image))
bounds = img.get_bounds()
print(bounds)

# Size the detector (in pixels) from the physical extent of the CT volume
width = int(abs(bounds[1][1] - bounds[1][0])) + 20
height = int((abs(bounds[2][1] - bounds[2][0]) + 150) / 2)

subject = read(
    volume=img,
    orientation="AP",
    bone_attenuation_multiplier=9.0,
)
drr = DRR(
    subject,  # A torchio.Subject object storing the CT volume, origin, and voxel spacing
    sdd=2560,  # Source-to-detector distance (i.e., the C-arm's focal length)
    height=height,
    width=width,  # Width of the DRR (if not separately provided, the generated image is square)
    delx=2,  # Pixel spacing (in mm)
).to(device)

# Set the camera pose with rotations (yaw, pitch, roll) and translations (x, y, z)
img = drr(
    rotations,
    translations,
    parameterization="euler_angles",
    convention="ZXY",
)

plot_drr(img, ticks=True)

This is an example of a CT scan I would use: ART_LOEX_001.nii.gz

Please let me know if you need more information to answer this question, thank you in advance!

Yassin

eigenvivek commented 3 months ago

Hi @YassinAbdelrahman, yup, that should be totally doable. I can take a closer look at your particular CT this weekend, but one thing that might help in the interim is the 3D rendering features in DiffDRR. They let you render DRRs in world coordinates relative to your CT scan, which should help you tweak the intrinsic/extrinsic parameters until you get the view you're after. Examples of how to use the rendering functions are in this notebook.

YassinAbdelrahman commented 3 months ago

Hi @eigenvivek, I would greatly appreciate it if you could take a look at my CT, thank you! I'm not sure I quite understand the 3D rendering features, so any help with the intrinsic/extrinsic parameters would also be very welcome.

sarbabi commented 3 months ago

@eigenvivek Hi, this is my question as well. Could you please have a look at the CT image attached to Yassin's question and let us know the best way to generate this DRR?

eigenvivek commented 3 months ago

Hi @YassinAbdelrahman and @sarbabi, I may be misunderstanding the exact viewing angle you're after, so just let me know if the following isn't right.

I started with the same X-ray geometry as the example in the README:

import matplotlib.pyplot as plt
import torch

from diffdrr.data import read
from diffdrr.drr import DRR
from diffdrr.visualization import plot_drr

subject = read("ART_LOEX_001.nii.gz", bone_attenuation_multiplier=5.0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
drr = DRR(subject, sdd=1020.0, height=200, delx=2.0).to(device)

A default frontal projection is already perpendicular to the knee joint:

rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")
plot_drr(img, ticks=False)
plt.show()

[DRR output: default frontal projection]

We can make the pixel size bigger such that the femoral heads are in the field of view:

drr.set_intrinsics(delx=5.0, dely=5.0)

rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[0.0, 850.0, 0.0]], device=device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")
plot_drr(img, ticks=False)
plt.show()

[DRR output: wider field of view with 5 mm pixels]
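For intuition, the detector's physical extent is just the pixel count times the pixel spacing, so enlarging delx widens the field of view without adding pixels (the anatomy's coverage is somewhat smaller than the detector's because of cone-beam magnification). A quick back-of-the-envelope check with the numbers above:

# Detector extent = number of pixels x pixel spacing
height_px, delx_mm = 200, 5.0
detector_fov_mm = height_px * delx_mm  # 1000 mm, vs. only 400 mm with the original delx=2.0
print(detector_fov_mm)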

Finally, we can translate the X-ray source and detector such that it's centered on the left femur:

rotations = torch.tensor([[0.0, 0.0, 0.0]], device=device)
translations = torch.tensor([[-80.0, 850.0, 0.0]], device=device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")
plot_drr(img, ticks=False)
plt.show()

[DRR output: view centered on the left femur]
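If you do still want an oblique view (source tilted rather than just translated, as in Yassin's original snippet), the same call accepts nonzero Euler angles. A rough sketch, reusing the drr object above; the angle and sign here are guesses that would need tuning with the 3D rendering tools described below:

# Hypothetical oblique view: a small tilt (in radians, the second angle under the
# ZXY convention, as in the original question) plus the shift toward the left femur
rotations = torch.tensor([[0.0, -0.3, 0.0]], device=device)
translations = torch.tensor([[-80.0, 850.0, 0.0]], device=device)
img = drr(rotations, translations, parameterization="euler_angles", convention="ZXY")
plot_drr(img, ticks=False)
plt.show()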

Regarding 3D rendering, here's how you could use the functions in DiffDRR to visualize the 3D geometry of what we've rendered:

import pyvista

from diffdrr.pose import convert
from diffdrr.visualization import drr_to_mesh, img_to_mesh, labelmap_to_mesh, plot_drr

pyvista.start_xvfb()

# Make a mesh from the CT volume
ct = drr_to_mesh(subject, "surface_nets", threshold=0, extract_largest=False, verbose=False)

# Make a mesh from the camera and detector plane
pose = convert(rotations, translations, parameterization="euler_angles", convention="ZXY")
camera, detector, texture, principal_ray = img_to_mesh(drr, pose)

# Make the plot
plotter = pyvista.Plotter()
plotter.add_mesh(ct)
plotter.add_mesh(camera, show_edges=True, line_width=1.5)
plotter.add_mesh(principal_ray, color="lime", line_width=3)
plotter.add_mesh(detector, texture=texture)

# Render the plot
plotter.add_axes()
plotter.add_bounding_box()
plotter.show_bounds(grid="front", location="outer", all_edges=True)
plotter.export_html("render.html")

which produces a rendering like this:

[3D rendering of the CT mesh, camera, principal ray, and textured detector plane]
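A small aside: pyvista.start_xvfb() and the HTML export are there for headless environments; if you have a local display, a minimal interactive alternative with the same plotter is:

plotter.show()  # open an interactive window instead of exporting to HTML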

Working through this example helped me catch a few improvements that could be made to the code (#302 and #303), so be sure to pull the latest version of the development branch to run these examples. Thanks for helping me find these!

eigenvivek commented 3 months ago

Closing for now, feel free to reopen if you have any other questions.