Closed: majasomething closed this issue 7 months ago
Hello @majasomething,
Thank you very much for your interest in the paper. We will prepare the scripts and upload them here soon. In the meantime, if you want to work with BraTS and MSSEG, you can already apply for the data on the respective websites. While the datasets are open source/public, they still require registration to obtain the data.
Anyways, we are working on this and hope to get back to you soon!
Have a nice weekend,
Cheers, Julian
Hi @majasomething,
sorry this is taking longer than anticipated. This still has a high priority for me - however, I am a bit swamped with preparing MICCAI and two talks during my stay in Canada, and I would like to provide a thoroughly tested version. I will try to update this as soon as possible. Thank you for your understanding!
Hi Julian,
thanks for letting me know about the current status. No worries and good luck with the MICCAI preparation!
Best, Maja
Hi,
Thanks for your meaningful work on isotropic MRI reconstruction.
I am trying to reproduce the results, but the relevant code appears to be missing.
Could you upload it? Thanks.
@11710615 thank you for letting us know. I just updated the codebase and tested the four different configs for one BraTS example - it should work! :) Let me know if you run into any other problems!
We computed all of the MICCAI experiments in a different GitHub repo which we are currently using for new developments, and I will be adding the pre-processing and baselines asap. Thanks for your patience @majasomething @11710615
Cheers, Julian
Thanks for your quick response to the problem. Your professionalism is truly admirable. Good luck with your MICCAI presentation.
Hi @11710615,
maybe I can help answer your questions. As far as I understand:
The LR images and the corresponding masks are rescaled by a factor of 4 in the respective dimension (the affine matrix needs to be adapted as well; see the sketch after this list). (Please refer to the Appendix of the paper for further details.)
As I understand it, the model is trained for each subject individually.
@jqmcginnis: Please correct me if I misunderstood anything.
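For illustration, here is a minimal sketch of what adapting the affine can look like with nibabel's rescale_affine, assuming a 1 mm isotropic input of shape (160, 224, 160) downsampled by a factor of 4 along the last axis; the identity affine is a stand-in for the real img.affine:

```python
import numpy as np
import nibabel as nib

# Sketch: adapt the affine when going from 1 mm isotropic (160, 224, 160)
# to anisotropic 1 x 1 x 4 mm (160, 224, 40).
aff = np.eye(4)  # placeholder for img.affine
new_aff = nib.affines.rescale_affine(aff, (160, 224, 160), (1, 1, 4), (160, 224, 40))
print(new_aff)   # the third axis now encodes 4 mm spacing; the FOV centre is preserved
```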
Hi @11710615 @majasomething
Sorry for the late reply, I did not see the notification, apologies for that!
1. HR -> LR Images:
> In your dataloader, it appears that LR (Low-Resolution) and GT (Ground Truth) have the same dimensions. Could you clarify whether you set the unsampled points in LR to 0 or if you use other interpolation methods, such as linear or cubic interpolation?
LR and GT should not have the same dimensions, as we already feed in the downsampled niftis for the LR images :slightly_smiling_face: We do not mask out points for this, but use spline interpolation to downsample the images from isotropic (GT/HR) to anisotropic LR images. We save these as LR images and use them as the input.
2. mask vs. mask_LR:
Both niftis, mask and mask_LR, actually contain the same brainmask. However, since the isotropic images (e.g. 160/224/160) and the anisotropic images (e.g. 160/224/40) have different dimensions, we keep both for easier access to the brainmask, i.e. mask_LR is a downsampled version of the HR brain mask. The masks help the model learn only the relevant parts of the brain, as a lot of the image content is background; you do not strictly need them, but they speed up the training.
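To make that concrete, here is a generic sketch of how a brainmask can restrict coordinate sampling to brain voxels (our assumption for illustration, not necessarily the repo's actual dataloader; the file name mask_LR.nii.gz is a placeholder):

```python
import numpy as np
import nibabel as nib

# Generic sketch: draw training coordinates only where the brainmask is set,
# so no training effort is spent on the (large) background region.
mask = nib.load("mask_LR.nii.gz").get_fdata() > 0   # boolean 3D array
coords = np.argwhere(mask)                          # (N, 3) in-brain voxel indices
batch = coords[np.random.choice(len(coords), size=4096, replace=False)]
```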
Exactly @majasomething - we downsample the cropped images by a factor of 4 in the respective dimension. This can be done e.g. in nibabel:
```python
import nibabel as nib
import nibabel.processing as nip
import numpy as np


def resample_nib(img, voxel_spacing=(1, 1, 1), order=3):
    """Resamples the nifti from its original spacing to another specified spacing

    Parameters:
    ----------
    img: nibabel image
    voxel_spacing: a tuple of 3 numbers specifying the desired new spacing
    order: the order of interpolation

    Returns:
    ----------
    new_img: The resampled nibabel image
    """
    # resample to new voxel spacing based on the current x-y-z-orientation
    aff = img.affine
    shp = img.shape
    zms = img.header.get_zooms()
    # Calculate new shape
    new_shp = tuple(np.rint([
        shp[0] * zms[0] / voxel_spacing[0],
        shp[1] * zms[1] / voxel_spacing[1],
        shp[2] * zms[2] / voxel_spacing[2]
    ]).astype(int))
    new_aff = nib.affines.rescale_affine(aff, shp, voxel_spacing, new_shp)
    new_img = nip.resample_from_to(img, (new_shp, new_aff), order=order, cval=-1024)
    print("[*] Image resampled to voxel size:", voxel_spacing)
    return new_img
```
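A hypothetical usage of resample_nib to produce the LR inputs and the LR mask discussed above (the file names are placeholders; using order=0 for the mask is our assumption, so that it stays binary):

```python
import nibabel as nib

# Hypothetical file names; adjust to your dataset layout.
hr_img = nib.load("t1.nii.gz")
lr_img = resample_nib(hr_img, voxel_spacing=(1, 1, 4), order=3)    # spline interpolation
nib.save(lr_img, "t1_LR.nii.gz")

hr_mask = nib.load("mask.nii.gz")
lr_mask = resample_nib(hr_mask, voxel_spacing=(1, 1, 4), order=0)  # nearest neighbour keeps it binary
nib.save(lr_mask, "mask_LR.nii.gz")
```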
Thank you for your patience!
Hello @jqmcginnis
Thank you for sharing the projects, that is really impressive work!
I would like to reproduce the work with the same pre-processing steps. I have access to the BraTS dataset, but I'm still confused about the input of the network. I tried to downsample the T1.nii file for one subject and built the data_dict the way your data utilities show, but it does not seem to work. For example, when using the BraTS dataset with the best-performing model, could you please clarify the LR and GT shapes after pre-processing, as well as the shapes right before the data enter the network? That would be super helpful!
Thank you in advance! Looking forward to your reply.
@ZiqianHuan9 @11710615 @majasomething
Thank you all for your incredible patience.
I just uploaded instructions for the BraTS experiments here. Please let me know if these instructions are helpful and clear. After collecting feedback, I will extend them to the MSSEG experiment as well.
Thank you once again, and if any issues arise, please let me know!
@jqmcginnis Thank you so much for your amazing work. I have followed the instructions and produced the data. The instructions are clear and absolutely helpful.
Thanks again for your help
Hello,
thank you for uploading the project!
Since I would like to reproduce the results in your paper, I want to use the same pre-processing pipeline. Could you upload the code for processing the high-resolution MR data to LR (especially regarding naming, metadata, and resolution)?
Thank you!