jqmcginnis / multi_contrast_inr

Source Code for our MICCAI Paper: Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations.
https://arxiv.org/abs/2303.15065

Upload of data-preparation to create LR images #3

Closed: majasomething closed this issue 6 months ago

majasomething commented 1 year ago

Hello,

thank you for uploading the project!

As I would like to reproduce the results in your paper, I want to use the same pre-processing pipeline. Could you upload the code for processing the high-resolution MR data to LR (especially regarding naming, metadata, and resolution)?

Thank you!

jqmcginnis commented 1 year ago

Hello @majasomething,

Thank you very much for your interest in the paper. We will prepare the scripts and upload them here soon. In the meantime, if you want to work with BRATS and MSSEG, you can already apply for the data on the respective websites. While the datasets are open source/public, they still require registration to obtain the data.

Anyways, we are working on this and hope to get back to you soon!

Have a nice weekend,

Cheers, Julian

jqmcginnis commented 1 year ago

Hi @majasomething,

sorry this is taking longer than anticipated. This still has a high priority for me - however, I am a bit swamped with preparing MICCAI and two talks during my stay in Canada, and I would like to provide a thoroughly tested version. I will try to update this as soon as possible. Thank you for your understanding!

majasomething commented 1 year ago

Hi Julian,

thanks for letting me know about the current status. No worries and good luck with the MICCAI preparation!

Best, Maja

11710615 commented 1 year ago

Hi, thanks for your meaningful work on isotropic MRI reconstruction. I am trying to reproduce the results, but a module referenced in dataset.py is not implemented.
Could you upload the relevant code? Thanks.

jqmcginnis commented 1 year ago

@11710615 thank you for letting us know. I just updated the codebase and tested the four different configs for one BraTS example - it should work! :) Let me know if you run into any other problems!

We computed all of the MICCAI experiments in a different GitHub repo which we are currently using for new developments, and I will be adding the pre-processing and baselines asap. Thanks for your patience @majasomething @11710615

Cheers, Julian

11710615 commented 1 year ago

Thanks for your quick response to the problem. Your professionalism is truly admirable. Good luck with your MICCAI presentation.

11710615 commented 1 year ago

Hi,

  1. I have one remaining question regarding the data preprocessing. In your dataloader, it appears that LR (low-resolution) and GT (ground truth) have the same dimensions. Could you clarify whether you set the unsampled points in LR to 0, or whether you use another interpolation method, such as linear or cubic interpolation? Additionally, I am curious about the format of the "*mask.nii.gz" files. What is the difference between the mask for LR and the one for GT? Does the LR mask indicate which points are sampled in LR (sampled points set to 1, unsampled points set to 0), while the GT mask marks the foreground region (the brain)?
  2. The second question: is a separate model trained for each subject individually? Thank you in advance for your response.

majasomething commented 1 year ago

Hi @11710615,

maybe I can help answer your questions. As far as I understand:

  1. The LR images and the corresponding masks are rescaled by a factor of 4 in the respective dimension (the affine matrix needs to be adapted as well; see the sketch after this list). (Please refer to the Appendix of the paper for further details.)

  2. As I understand it, the model is trained for each subject individually.
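
For concreteness, here is a minimal nibabel sketch of point 1 (the file name and the choice of the through-plane axis are illustrative assumptions, not the repo's actual convention):

import numpy as np
import nibabel as nib

img = nib.load("sub-01_T1w.nii.gz")            # hypothetical path; e.g. 1 mm isotropic, shape 160x224x160
zooms = np.asarray(img.header.get_zooms())     # current spacing, e.g. (1.0, 1.0, 1.0)
new_zooms = zooms * np.array([1.0, 1.0, 4.0])  # factor 4 along the last axis
new_shape = np.rint(np.asarray(img.shape) * zooms / new_zooms).astype(int)  # -> 160x224x40
# the affine must encode the new voxel size so the LR image stays aligned in world space
new_affine = nib.affines.rescale_affine(img.affine, img.shape, new_zooms, new_shape)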

@jqmcginnis: Please correct me, if I misunderstood anything.

jqmcginnis commented 1 year ago

Hi @11710615 @majasomething

Sorry for the late reply, I did not see the notification, apologies for that!

1. HR -> LR Images:

In your dataloader, it appears that LR (Low-Resolution) and GT (Ground Truth) have the same dimensions. Could you clarify whether you set the unsampled points in LR to 0 or if you use other interpolation methods, such as linear or cubic interpolation?

LR and GT should not have the same dimensions, as we already feed in the downsampled niftis for the LR images :slightly_smiling_face: We do not mask out points for this, but use spline interpolation to downsample the images from isotropic (GT / HR) to anisotropic LR images. We save these as LR niftis and use them as the input.

Both niftis, mask and mask_LR, actually contain the same brain mask. However, since the isotropic images (e.g. 160/224/160) and the anisotropic ones (e.g. 160/224/40) have different dimensions, we provide both for easier access, i.e. mask_LR is a downsampled version of the HR brain mask. The masks help the model learn only the relevant parts of the brain, as a lot of the image content is background; you do not strictly need them, but they speed up the training. A small sketch of how this could look follows below.
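
Here is a minimal sketch of that idea (file names are hypothetical placeholders): the LR mask is simply the HR mask resampled with nearest-neighbour interpolation, and the training coordinates are drawn from brain voxels only:

import numpy as np
import nibabel as nib
import nibabel.processing as nip

lr_img = nib.load("sub-01_T1w_LR.nii.gz")   # anisotropic LR image, e.g. 160x224x40
mask_hr = nib.load("sub-01_mask.nii.gz")    # isotropic HR brain mask, e.g. 160x224x160

# order=0 (nearest neighbour) keeps the mask binary during resampling
mask_lr = nip.resample_from_to(mask_hr, lr_img, order=0)

# restrict training to brain voxels, since most of the volume is background
brain_idx = np.argwhere(np.asanyarray(mask_lr.dataobj) > 0)  # (N, 3) voxel indices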

Exactly @majasomething - we downsample the cropped images by a factor of 4 in the respective dimension. This can be done e.g. in nibabel:

import numpy as np
import nibabel as nib
import nibabel.processing as nip

def resample_nib(img, voxel_spacing=(1, 1, 1), order=3):
    """Resamples the nifti from its original spacing to another specified spacing

    Parameters
    ----------
    img: nibabel image
    voxel_spacing: a tuple of 3 numbers specifying the desired new spacing (in mm)
    order: the order of the spline interpolation

    Returns
    -------
    new_img: the resampled nibabel image

    """
    # resample to the new voxel spacing based on the current x-y-z orientation
    aff = img.affine
    shp = img.shape
    zms = img.header.get_zooms()
    # calculate the new shape implied by the new spacing
    new_shp = tuple(np.rint([
        shp[0] * zms[0] / voxel_spacing[0],
        shp[1] * zms[1] / voxel_spacing[1],
        shp[2] * zms[2] / voxel_spacing[2]
        ]).astype(int))
    new_aff = nib.affines.rescale_affine(aff, shp, voxel_spacing, new_shp)
    new_img = nip.resample_from_to(img, (new_shp, new_aff), order=order, cval=-1024)
    print("[*] Image resampled to voxel size:", voxel_spacing)
    return new_img
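
With the imports above, a typical call to turn a 1 mm isotropic HR image into a 4 mm through-plane LR image (the spacing and file names are illustrative) would be:

img_hr = nib.load("sub-01_T1w.nii.gz")                           # isotropic HR / GT
img_lr = resample_nib(img_hr, voxel_spacing=(1, 1, 4), order=3)  # anisotropic LR
nib.save(img_lr, "sub-01_T1w_LR.nii.gz")
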
2. Yes, exactly - we train a separate model for each subject; that's one of the main selling points of our paper :+1: A minimal sketch of the idea is below.
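
To make this concrete, here is a minimal PyTorch sketch of the per-subject idea (the tiny MLP is a stand-in for illustration, not the paper's actual architecture or training setup): the network is re-initialised and fitted from scratch for every subject, so no weights are shared across subjects.

import torch
import torch.nn as nn

def make_inr(in_dim=3, hidden=256, out_dim=1):
    # fresh random weights on every call -> one independent model per subject
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

def fit_subject(coords, intensities, steps=2000, lr=1e-4):
    # coords: (N, 3) normalised voxel coordinates; intensities: (N, 1) LR values
    model = make_inr()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), intensities)
        loss.backward()
        opt.step()
    return model

# one independently trained model per subject:
# models = {sid: fit_subject(c, y) for sid, (c, y) in subjects.items()}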

Thank you for your patience!

ZiqianHuan9 commented 10 months ago

Hello @jqmcginnis

Thank you for sharing the projects, that is really impressive work!

I would like to reproduce the work with the same pre-processing steps. I have access to the BRATS dataset, but I am still confused about the input of the network. I tried to downsample the T1.nii file for one subject and build the data_dict as shown in your data utilities, but it does not seem to work. For example, when using the BRATS dataset with the best-performing model, could you please clarify the LR and GT shapes during pre-processing, and the shapes right before the data enters the network? That would be super helpful!

Thank you in advance! Looking forward to your reply.

jqmcginnis commented 10 months ago

@ZiqianHuan9 @11710615 @majasomething

Thank you all for your incredible patience.

I just uploaded instructions for the BraTS experiments here. Please let me know if these instructions are helpful and clear. After collecting feedback, I will extend them to the MSSEG experiment as well.

Thank you once again, and if any issues arise, please let me know!

ZiqianHuan9 commented 10 months ago

@jqmcginnis Thank you so much for your amazing work. I have followed the instructions and produced the data. The instructions are clear and absolutely helpful.

Thanks again for your help