CERN / TIGRE

TIGRE: Tomographic Iterative GPU-based Reconstruction Toolbox
BSD 3-Clause "New" or "Revised" License

Inconsistencies in CT Reconstruction using SIRT algorithm #512

Closed yunsuper123 closed 4 months ago

yunsuper123 commented 6 months ago

Expected Behavior

After implementing SIRT, I expect the CT reconstruction volume to be the same as the ground truth CT.

Actual Behavior

I slightly adjusted demos 1, 2, and 3 to create the geometry and projection data from a CT .mha file, then ran SIRT from demo 7 to reconstruct the volume. Although I tried to make sure all the geometry variables were correct, the reconstructed shape still looks wrong. The first image is the ground truth CT shown in 3D Slicer, and the second is my reconstruction. It's quite possible I'm overlooking something, so I was wondering if you could shed some light on the potential cause of this inconsistency. Thank you in advance for your assistance and for the amazing contributions you've made to our community.

[Images: ground truth CT in 3D Slicer; SIRT reconstruction]

Code to reproduce the problem (If applicable)

import tigre
import numpy as np
from tigre.utilities import sample_loader
from tigre.utilities import CTnoise
import tigre.algorithms as algs
from matplotlib import pyplot as plt
import imageio
import os
import SimpleITK as sitk

def save_mha(image, filename):
    itkimage = sitk.GetImageFromArray(image)
    sitk.WriteImage(itkimage, filename)

# Let's create a geometry object
geo = tigre.geometry()

# Distances
geo.DSD = 946.746  # Distance Source Detector (mm)
geo.DSO = 538.52  # Distance Source Origin (mm)
# Detector parameters
geo.nDetector = np.array([128, 128])  # number of pixels (px)
# geo.dDetector = np.array([500/128, 500/128])
geo.dDetector = np.array([1000/128, 1000/128])  # size of each pixel (mm)
geo.sDetector = geo.nDetector * geo.dDetector  # total size of the detector (mm)
# Image parameters
geo.nVoxel = np.array([225, 512, 512])  # number of voxels (vx)
geo.sVoxel = np.array([281.25, 360, 360])  # total size of the image (mm)
geo.dVoxel = geo.sVoxel / geo.nVoxel  # size of each voxel (mm)
# Offsets
geo.offOrigin = np.array([0, 0, 0])  # Offset of image from origin (mm)
geo.offDetector = np.array([0, 0])  # Offset of detector (mm)

# Auxiliary
geo.accuracy = 0.5  # Accuracy of the forward projection: samples per voxel (recommended <= 0.5)
geo.COR = 0  # y direction displacement for centre-of-rotation correction (mm)
geo.rotDetector = np.array([0, 0, 0])  # Rotation of the detector about the X, Y and Z axes (rad)
geo.mode = "cone"  # Or 'parallel'. Geometry type.

#%% Load data and generate projections
# define angles
angles = np.linspace(0, 2 * np.pi, 72)

groundtruth = sitk.ReadImage("data.mha")
# TIGRE expects a float32 volume in (z, y, x) order
groundtruth = sitk.GetArrayFromImage(groundtruth).astype(np.float32)

# Simulate forward projection.
# To match with mathematical notation, the projection operation is called Ax
projections = tigre.Ax(groundtruth, geo, angles)

# Optional arguments for all of them
# ==========================================================================
lmbda = 1
lambdared = 0.999
initmode = None
verbose = True
qualmeas = ["RMSE", "SSD"]

# SIRT and SART both have no extra input parameters.
# =========================================================================
imgSIRT, qualitySIRT = algs.sirt(
    projections,
    geo,
    angles,
    20,
    lmbda=lmbda,
    lmbda_red=lambdared,
    verbose=verbose,
    Quameasopts=qualmeas,
    computel2=True,
)

print(imgSIRT.shape)
print(qualitySIRT.shape)

# Save the SIRT reconstruction result
save_mha(imgSIRT, 'SIRT.mha')


AnderBiguri commented 6 months ago

Hi @yunsuper123 ,

I don't see anything inherently wrong in the results you are showing, why are you worried?

You are reconstructing an image that is much larger than the detector you are simulating (512 vs 128 pixels), so it will definitely not be very sharp. You are also simulating only 72 projections, which is quite low. The other thing I would say is that SIRT converges quite slowly as an algorithm; I would suggest more like 200 iterations rather than 20. Other algorithms such as OS-SART or CGLS would produce comparable results faster, within 20-50 iterations. I also recommend, when exploring iterative algorithms, always comparing against an FDK reconstruction and not only the ground truth, so you can see how the iterative algorithms perform relative to a standard clinical reconstruction.
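The convergence gap between SIRT and CGLS at 20 iterations can be seen even on a toy least-squares problem. The sketch below is pure NumPy and only illustrates the iteration schemes; it is not the TIGRE implementation, and the random matrix `A` merely stands in for a projection operator:

```python
import numpy as np

def sirt(A, b, niter, lmbda=1.0):
    """SIRT-style iteration: x += lmbda * V * A^T * W * (b - A x),
    where W and V are the inverse row and column sums of A."""
    W = 1.0 / A.sum(axis=1)
    V = 1.0 / A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(niter):
        x += lmbda * V * (A.T @ (W * (b - A @ x)))
    return x

def cgls(A, b, niter):
    """Conjugate gradient applied to the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    d = s.copy()
    gamma = s @ s
    for _ in range(niter):
        q = A @ d
        alpha = gamma / (q @ q)
        x += alpha * d
        r -= alpha * q
        s = A.T @ r
        gamma, gamma_old = s @ s, gamma
        d = s + (gamma / gamma_old) * d
    return x

rng = np.random.default_rng(0)
A = rng.random((80, 40))   # nonnegative entries, like a projection matrix
b = A @ rng.random(40)     # consistent data
for n in (20, 200):
    print(n, np.linalg.norm(A @ sirt(A, b, n) - b),
             np.linalg.norm(A @ cgls(A, b, n) - b))
```

On this toy problem CGLS is essentially converged after 20 iterations, while SIRT needs an order of magnitude more iterations to drive the residual comparably low, which mirrors the 200-vs-20 advice above.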

AnderBiguri commented 6 months ago

On second look: are you worried about the cranio-caudal shape of the result image? That depends entirely on your visualization, right? I don't remember exactly how .mha files work, but shouldn't the header carry some information on the slice thickness? Likely this is not being saved, and therefore whatever software you are using for visualization assumes that voxels are the same size in all directions, which they are not.