daviddmc / NeSVoR

NeSVoR is a package for GPU-accelerated slice-to-volume reconstruction.
MIT License

Cannot reproduce results on FeTA using your slicing strategy. #8

Closed Edenzzzz closed 1 year ago

Edenzzzz commented 1 year ago

Hi, I tried using NeSVoR to reconstruct a volume from stacks of slices sampled from the FeTA dataset. Stack assessment gave a high NCC (> 0.92). I enabled registration and bias field correction and sampled 3 orthogonal stacks with gaps over the whole volume, using modified code from your slice_acquisition tests, but the results are terrible. Even worse, if I enable segmentation the whole stack is masked out. It's hard to infer from the paper how else the stacks could be extracted. Here's my code and the results:

nesvor reconstruct \
--input-stacks stacks/feta_stack1_gap=3.nii.gz stacks/feta_stack2_gap=3.nii.gz stacks/feta_stack3_gap=3.nii.gz \
--output-volume volumes/feta_rec_nesvor_gap=3.nii.gz \
--bias-field-correction \
--registration svort
 def get_cg_recon_test_data(self, angles, volume: torch.Tensor=None, return_stack_as_list=True):

        vs = 240 if volume is None else max(volume.shape) # z axis

        gap = s_thick = 3  # slice gap (set equal to the slice thickness here)
        res = 1
        res_s = 1.5
        n_slice = int((np.sqrt(3) * vs) / gap) + 4
        ss = int((np.sqrt(3) * vs) / res_s) + 4

        if volume is None:
            print(f"Creating a Phantom of {(vs, vs, vs)} with resolution {res}")
            if self.phantom == "default":
                volume = phantom3d(n=vs, phantom="shepp_logan")
            else:
                volume = shepp_logan((vs, vs, vs))
        else:
            print('Using provided volume')

        volume = torch.tensor(volume, dtype=torch.float32).cuda().contiguous()
        volume = pad_to_length(volume, vs)
        volume = volume.unsqueeze(0).unsqueeze(0)  # add batch/channel dims for SVoRT

        psf = get_PSF(res_ratio=(res_s / res, res_s / res, s_thick / res)).cuda()

        stacks = []
        transforms = []

        # sample a stack of slices for each triplet of angles
        for i in range(len(angles)):
            angle = (
                torch.tensor([angles[i]], dtype=torch.float32)
                .cuda()
                .expand(n_slice, -1)
            )
            tz = (
                torch.arange(0, n_slice, dtype=torch.float32).cuda()
                - (n_slice - 1) / 2.0
            ) * gap
            tx = ty = torch.ones_like(tz) * 0.5  # pick middle coordinates? but why 0.5 and not 0.5 * dim?
            t = torch.stack((tx, ty, tz), -1)
            transform = RigidTransform(torch.cat((angle, t), -1), trans_first=True)
            # sample slices
            mat = mat_update_resolution(transform.matrix(), 1, res)
            slices = slice_acquisition(
                mat, volume, None, None, psf, (ss, ss), res_s / res, False, False
            )
            if return_stack_as_list:
                slices = slices.squeeze()
            stacks.append(slices)  # shape: (h, 1, w, l)
            transforms.append(transform)

        params = {
            "psf": psf,
            "slice_shape": (ss, ss),
            "res_s": res_s,
            "res_r": res,
            "interp_psf": False,
            "volume_shape": (vs, vs, vs),
        }

        # save stacks to test NeSVoR
        if len(stacks) == 3:
            self._save_stacks(stacks, gap=gap)
[image: reconstruction results]
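As a side note on the geometry in the snippet: `n_slice` and `ss` are sized so that each stack covers the volume's body diagonal (√3 · vs) with a small margin of 4 slices/pixels, which guarantees full coverage at any rotation. This standalone sketch reproduces that arithmetic with the same values as above (nothing NeSVoR-specific):

```python
import math

# Values mirrored from the snippet above.
vs = 240      # cubic volume side length (voxels at res = 1)
gap = 3       # slice gap along the stack axis
res_s = 1.5   # in-plane resolution of the simulated slices

# The stack must span the body diagonal sqrt(3) * vs of the cube,
# plus a margin of 4, so slices cover the volume under any rotation.
n_slice = int(math.sqrt(3) * vs / gap) + 4   # number of slices per stack
ss = int(math.sqrt(3) * vs / res_s) + 4      # in-plane slice size in pixels

print(n_slice, ss)  # 142 281
```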
daviddmc commented 1 year ago

Hi @Edenzzzz, did you try to reconstruct using the default parameters, like this?

nesvor reconstruct \
    --input-stacks /incoming/stack-1.nii.gz ... /incoming/stack-N.nii.gz \
    --thicknesses <thick-1> ... <thick-N> \
    --output-volume /outgoing/volume.nii.gz

The segmentation model (MONAIfbs) is used to remove maternal tissue from the images, and it was trained on data that contain maternal tissue. Therefore, if you apply it to an image without background, it might remove some of the brain.

Did you check the simulated stack data? Could you share the simulated files with me?

Edenzzzz commented 1 year ago

feta_stack3_gap=3.nii.gz feta_stack2_gap=3.nii.gz feta_stack1_gap=3.nii.gz

Thanks for your reply. Here are my stack files sliced from FeTA and I'm looking into the options you mentioned.

daviddmc commented 1 year ago

Hi, you may try to save each stack as follows.

from nesvor.image import Stack

Stack(
    slices, 
    transformation=transform, 
    resolution_x=res_s, 
    resolution_y=res_s,
    thickness=s_thick,
    gap=gap,
).get_volume().save('stack.nii')

You may find more information about the Stack class here.

Edenzzzz commented 1 year ago


Thanks for the reply! The result now looks closer to the input. However, just to be sure: based on my code, is res_s always 1.5, or should it be the resolution of the source volume (0.5 mm for FeTA), with res being the desired output stack resolution (0.8 mm according to the paper)? I've found that only setting res_s to 1.5 produces reasonable results.

This repo is well-structured and promising, but users who are not experts in MRI processing may benefit from a bit more instruction on preparing stacks and specifying resolutions. Thanks for your contribution!

daviddmc commented 1 year ago

Yes, res_s is the in-plane resolution of the simulated images, so it should be greater than the resolution of both the source volume and the reconstructed volume.
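In other words, the three resolutions in this thread should satisfy a strict ordering. A quick sanity check using the values discussed above (0.5 mm FeTA source, 0.8 mm reconstruction, 1.5 mm simulated in-plane; plain Python, no NeSVoR dependency):

```python
# Resolutions in mm, taken from the discussion above.
res_src = 0.5    # FeTA source volume
res_recon = 0.8  # reconstructed volume (per the paper)
res_s = 1.5      # in-plane resolution of the simulated slices

# The simulated slices are degraded observations of the source volume,
# so their resolution must be coarser than both the source and the output.
assert res_s > res_src and res_s > res_recon
print(res_s / res_src)  # in-plane downsampling factor, source -> slices
```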