MungoMeng / Registration-CorrMLP

[CVPR2024 Oral && Best Paper Candidate] CorrMLP: Correlation-aware MLP-based networks for deformable medical image registration
GNU General Public License v3.0

Dataset details #6

Open Atik-Ahamed opened 2 weeks ago

Atik-Ahamed commented 2 weeks ago

Dear Authors, excellent work! In your paper, you mention using 2,656 brain MRI images from four public datasets (ADNI, ABIDE, ADHD, IXI). However, those data sources contain many more images than you used. I was wondering how you selected those particular 2,656 images. Could you please release the list of images you used? Or could you provide guidance on how to download that particular subset?

Thanks in advance!

MungoMeng commented 2 weeks ago

Hi, thanks for your interest in our work! We collected the training set in 2019, so the included datasets (ADNI, ABIDE, ADHD, IXI) have been extended considerably over the past five years (and we did not update our selection). Some scans were also excluded because their skull stripping was unsuccessful. Actually, it's unnecessary to strictly follow our data settings. You can collect your own training set, including all the images available to you. Our method is not optimized for any specific dataset and can be used directly on your data.

Atik-Ahamed commented 2 weeks ago

Thanks for mentioning more details.

2250432015 commented 5 days ago

I am writing to seek clarification on the "train_pairs.npy" and "valid_pairs.npy" files associated with your work. Could you please provide insight into how the data pairs (train_pairs and valid_pairs) in these files were generated? If available, could you direct me to any detailed documentation or usage guidelines for these files?

MungoMeng commented 5 days ago

Hi, thanks for your interest in our work!
Personally, I suggest you simply adopt our code in network.py, which is the core technical contribution. The other code can be customized for different datasets and data structures.
In our settings, "train_pairs.npy" and "valid_pairs.npy" are two files indicating which image pairs will be used for training and validation. Each file contains a list, such as [[filename_1, filename_2], [filename_3, filename_4], [filename_5, filename_6]], which specifies three image pairs for training/validation. All of these files (filename_1 through filename_6) should be placed in the data_dir. In addition, each of them has been preprocessed into an npz file, which can be loaded directly with np.load (please refer to the code in datagenerator.py).
Overall, this data-loading code is highly customized. That's why I suggest you use only our network code within your own data-loading framework.
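Based on the list-of-pairs format described above, a minimal sketch of how such a pairs file could be created and consumed might look like the following. Note that the filenames and the npz key (`vol`) are hypothetical placeholders, not taken from the repository's actual data:

```python
import numpy as np

# Build a pairs file: a list of [moving, fixed] filename pairs.
# These filenames are illustrative only; in practice they would be
# preprocessed npz volumes placed in data_dir.
train_pairs = [
    ["subject_001.npz", "subject_002.npz"],
    ["subject_003.npz", "subject_004.npz"],
]
np.save("train_pairs.npy", np.array(train_pairs))

# Load the pairs back; np.save stores the list as a 2-D string array.
pairs = np.load("train_pairs.npy")
for moving_name, fixed_name in pairs:
    # Each entry names one image pair for training/validation.
    print(moving_name, "->", fixed_name)
```

The actual key layout inside each npz file depends on the preprocessing in datagenerator.py, so check that code for the expected array names.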

xxxh111 commented 4 days ago

Hi, may I ask if you could provide detailed information on the preprocessing, i.e., a more precise description of the pipeline or the code? Thank you very much.