CERN / TIGRE

TIGRE: Tomographic Iterative GPU-based Reconstruction Toolbox
BSD 3-Clause "New" or "Revised" License

Reconstruction with projections in png format #518

Closed yunsuper123 closed 4 months ago

yunsuper123 commented 4 months ago

Expected Behavior

Hello, I would like to ask whether TIGRE supports using projection images in PNG format for reconstruction. If so, how should I load them? Thank you!

Actual Behavior

I was trying to use DRR images from varying angles, obtained from another dataset, for reconstruction. These DRRs are in PNG format, so the pixel values range from 0 to 255. However, I noticed that the projections from tigre.Ax() in the demo code have a different intensity scale. I have already preprocessed the PNG files to make them consistent with the orientations used by tigre.Ax(). Could you please shed some light on how to handle the pixel values? Any instructions would be greatly appreciated!
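For context: a common (not TIGRE-specific) way to turn detector intensity images into the line-integral data that iterative reconstruction expects is the Beer-Lambert log transform, p = -ln(I / I0). The sketch below is an assumption-laden illustration: it assumes 8-bit PNGs and takes the flat-field intensity I0 as the brightest pixel, which may not match how the DRRs were generated.

```python
import numpy as np

def intensities_to_line_integrals(images, i0=None, eps=1e-6):
    """Convert detector intensity images (e.g. 0-255 PNGs) to line
    integrals p = -ln(I / I0) via the Beer-Lambert law.  I0 is the
    unattenuated (flat-field) intensity; if not given, the brightest
    pixel in the stack is used as a crude estimate."""
    images = np.asarray(images, dtype=np.float32)
    if i0 is None:
        i0 = images.max()
    # clip to avoid log(0) on fully absorbed rays
    return -np.log(np.clip(images / i0, eps, None))

# a fake 8-bit "projection": unattenuated background at 255, darker object
proj = np.full((4, 4), 255.0)
proj[1:3, 1:3] = 128.0
p = intensities_to_line_integrals(proj)  # background -> 0, object -> ~0.69
```

Background pixels map to a line integral of 0 (no attenuation), so the result is on a physically meaningful non-negative scale regardless of the PNG's 0-255 range.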

Code to reproduce the problem (If applicable)

import tigre
import numpy as np
from tigre.utilities import sample_loader
from tigre.utilities import CTnoise
import tigre.algorithms as algs
from matplotlib import pyplot as plt
from tigre.utilities.im3Dnorm import im3DNORM
import imageio
import os
import SimpleITK as sitk
from PIL import Image
import cv2

def load_projections(folder_path, num_images):
    images = []
    for i in range(num_images):
        filename = os.path.join(folder_path, f'xray{i:04d}.png')
        if os.path.exists(filename):
            image = imageio.imread(filename, pilmode='L')
            images.append(image)
        else:
            print(f"File not found: {filename}")
            break
    return np.array(images)

def save_mha(original, recon, filename):
    recon_image = sitk.GetImageFromArray(recon)
    recon_image.CopyInformation(original)
    sitk.WriteImage(recon_image, filename)

# Let's create a geometry object
geo = tigre.geometry()

# Distances
geo.DSD = 946.746  
geo.DSO = 538.52 
# Detector parameters
geo.nDetector = np.array([128, 128])  
# geo.dDetector = np.array([500/128, 500/128])
geo.dDetector = np.array([1000/128, 1000/128])  
geo.sDetector = geo.nDetector * geo.dDetector  
# Image parameters
geo.nVoxel = np.array([252, 512, 512]) 
geo.sVoxel = np.array([315, 326, 326])  
geo.dVoxel = geo.sVoxel / geo.nVoxel  
# Offsets
geo.offOrigin = np.array([0, 0, 0])  
geo.offDetector = np.array([0, 0])  

# Auxiliary
geo.accuracy = 0.5 
geo.COR = 0 
geo.rotDetector = np.array([0, 0, 0])  
geo.mode = "cone"  

#%% Load data and generate projections
# define angles (TIGRE expects an array of angles in radians, not a count)
num_angles = 72
angles = np.linspace(0, 2 * np.pi, num_angles, endpoint=False)

img_path = "imgfolder"
projections = load_projections(img_path, num_angles)

epsilon = (
    im3DNORM(tigre.Ax(algs.fdk(projections, geo, angles), geo, angles) - projections, 2)
    * 0.15
)
alpha = 0.001
ng = 10

# Other optional parameters
lmbda = 1
lambdared = 1  # you generally want 1
alpha_red = 0.95
ratio = 0.95
verb = True
qualmeas = ["RMSE"]
it = 50
blocksize = 10

imgOSASDPOCS, qualityOSASDPOCS = algs.os_asd_pocs(
    projections,
    geo,
    angles,
    it,  # these are very important
    tviter=ng,
    maxl2err=epsilon,
    alpha=alpha,  # less important.
    lmbda=lmbda,
    lmbda_red=lambdared,
    rmax=ratio,
    verbose=verb,
    Quameasopts=qualmeas,
    # OSC params
    blocksize=blocksize,
    computel2=True,
)

plt.subplot(211)
plt.plot(np.log10(qualityOSASDPOCS[0, :]).T)
plt.title("Convergence")
plt.xlabel("Iteration")
plt.ylabel("$ log_{10}(|Ax-b|) $")
plt.subplot(212)
plt.plot(np.log10(qualityOSASDPOCS[1, :]).T)
plt.title(f"Evolution of RMSE ({blocksize})")
#plt.gca().legend(("OSASDPOCS"))
plt.xlabel("Iteration")
plt.ylabel("$ log_{10}(RMSE) $")
plt.subplots_adjust(hspace=0.5)

plot_save_path = "test.png"
plt.savefig(plot_save_path)

#plt.show()

# Save the OS-ASD-POCS reconstruction result
# ('original' must be a reference SimpleITK image, e.g. from sitk.ReadImage,
#  so that its spacing/origin metadata can be copied onto the result)
save_mha(original, imgOSASDPOCS, 'test.mha')

Specifications

AnderBiguri commented 4 months ago

Hi @yunsuper123! Units in CT imaging are essentially meaningless: detectors generally give you arbitrary digital data, just like yours. This is in fact why Hounsfield Units were invented!

So don't worry about the units/scale of the projections; it will only produce a CT image on a different scale, nothing else.
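The scale-invariance described here follows from reconstruction being (essentially) linear in the data: feed in projections scaled by a constant k and you get the same image scaled by k. TIGRE's own operators need a GPU, so the toy below uses a small random dense matrix and a least-squares solve as a stand-in linear reconstructor; the matrix and sizes are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 10))     # toy "projection matrix" (stand-in for tigre.Ax)
x_true = rng.random(10)      # toy "object"
b = A @ x_true               # toy "projections"

# reconstruct from the projections, and from a copy rescaled to a
# different intensity range (e.g. a 0-255 PNG scale)
x1, *_ = np.linalg.lstsq(A, b, rcond=None)
x2, *_ = np.linalg.lstsq(A, 255.0 * b, rcond=None)

# the two reconstructions differ only by the same global factor
print(np.allclose(x2, 255.0 * x1))  # True
```

So rescaling the input projections changes only the global scale of the image, not its structure, which is the point being made above.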

yunsuper123 commented 4 months ago

Hi @AnderBiguri I'd like to quantitatively compare the reconstruction results with the ground truth (-1000 to +1000), so I think I need to make sure the intensity scale of the reconstruction is consistent?

Thank you for your response!!

AnderBiguri commented 4 months ago

Your ground truth is in HUs, but CT reconstruction happens in "linear attenuation units", which are arbitrary non-negative values.

HUs, as said, are not produced by the scanner. An HU image is created after reconstruction by taking an area of the image that contains air (generally attenuation ≈ 0) and mapping it to -1000, then taking something with water (generally calibrated for each machine) and mapping it to 0.

None of this is related to reconstruction: it's a post-processing normalization step, calibrated for each machine, so it really has nothing to do with reconstruction itself.
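The air/water rescaling described above is a linear map; with air taken as attenuation 0, the standard definition is HU = 1000 * (mu - mu_water) / (mu_water - mu_air). A minimal sketch (the mu_water calibration value here is purely illustrative, not from TIGRE):

```python
import numpy as np

def mu_to_hu(mu, mu_water, mu_air=0.0):
    """Standard Hounsfield rescaling: air -> -1000 HU, water -> 0 HU."""
    return 1000.0 * (np.asarray(mu) - mu_water) / (mu_water - mu_air)

mu_water = 0.2                    # illustrative calibration value (1/cm)
vol = np.array([0.0, 0.2, 0.4])   # air, water, something denser
hu = mu_to_hu(vol, mu_water)      # -> -1000, 0, +1000
```

The key point is that mu_water (and mu_air) come from a per-machine calibration, which is exactly the piece that is hidden from you if you only have the vendor's HU images.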

yunsuper123 commented 4 months ago

I see. So if I want to compare two reconstruction results quantitatively (say, via mean squared error), the typical approach is to normalize the voxel values to 0-1 and then do the math, right? Thank you again for your response, I greatly appreciate it!!

AnderBiguri commented 4 months ago

@yunsuper123 not exactly, no.

It depends on your situation. You have reconstructions from a machine, in HUs. If you want to use these HU images to simulate projections, then reconstruct and compare, then yes: normalize, and compare on that scale.

If you have a reconstruction in HUs and a sinogram from the machine, you are in a bit of a pickle, mostly because the mu->HU conversion is likely a calibrated process for your machine that is hidden from you. In this case the best thing to do would likely be to reconstruct the FDK image yourself rather than relying on the machine's HU reconstruction, since you don't have the mu->HU conversion code, and that conversion is likely algorithm-dependent anyway.

In short: only use HUs if you are in control of the mu->HU conversion.

Hope this helps! :)

PS: mu == linear attenuation coefficient.
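Putting the normalize-then-compare advice into code: min-max normalize both volumes to [0, 1], then compute the RMSE. This is a generic sketch with hypothetical helper names, not a TIGRE API; it only makes sense in the first situation above, where both images are on scales related by an affine map.

```python
import numpy as np

def minmax(v):
    """Rescale an array to [0, 1]."""
    v = np.asarray(v, dtype=np.float64)
    return (v - v.min()) / (v.max() - v.min())

def normalized_rmse(a, b):
    """RMSE between two volumes after min-max normalizing each to [0, 1]."""
    return float(np.sqrt(np.mean((minmax(a) - minmax(b)) ** 2)))

# two "reconstructions" of the same object on different intensity scales
gt = np.array([[-1000.0, 0.0], [500.0, 1000.0]])   # HU-scale ground truth
recon = 0.0003 * (gt + 1000.0)                     # arbitrary mu-like scale
err = normalized_rmse(gt, recon)                   # ~0: same image, rescaled
```

Note that min-max normalization removes any affine scale difference, so two reconstructions of the same object on different intensity scales compare as (nearly) identical, while genuine structural differences still show up in the RMSE.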