Closed — rtx98 closed this issue 1 year ago
Hi @rtx98,
Thanks for your interest in our work. As stated in the paper, we split the OASIS dataset into 255, 10, and 149 volumes for the training, validation, and test sets.
The validation Dice score in the log files differs from the one reported in the paper. Specifically, the paper reports the Dice score over only 23 subcortical structures, whereas the score in the log files is the average over all 35 structures. To see which anatomical structures are involved, you may refer to Figure 4 of our paper.
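For illustration only (this is a sketch, not the authors' evaluation code): the gap between the two numbers comes from averaging per-structure Dice over different label subsets. The helper below computes mean Dice over a given list of label IDs; the actual 23-structure subset must be taken from Figure 4 of the paper, so the `subset_23` list here is a placeholder.

```python
import numpy as np

def mean_dice(pred, gt, labels):
    """Mean Dice over the given label IDs (hypothetical helper;
    the paper's 23-structure subset is listed in its Figure 4)."""
    scores = []
    for k in labels:
        p, g = (pred == k), (gt == k)
        denom = p.sum() + g.sum()
        if denom == 0:
            continue  # skip structures absent from both volumes
        scores.append(2.0 * np.logical_and(p, g).sum() / denom)
    return float(np.mean(scores))

# Toy 35-label maps; in practice these would come from seg35.nii.gz.
np.random.seed(0)
pred = np.random.randint(0, 36, size=(8, 8, 8))
gt = pred.copy()

all_35 = range(1, 36)          # log files: average over all 35 structures
subset_23 = list(range(1, 24)) # placeholder; use the 23 IDs from Figure 4
print(mean_dice(pred, gt, all_35))  # 1.0 for identical label maps
print(mean_dice(pred, gt, subset_23))
```

Averaging over fewer (or different) structures generally shifts the mean, which would explain why the logged score does not match the paper's table.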
Hi @cwmok, thanks a lot! So should I keep the first 255 volumes for training, the next 10 for validation, and the last 149 for testing, as in the code below?
import glob

train_imgs = sorted(glob.glob(datapath + "/OASIS_OAS1_*_MR1/norm.nii.gz"))[:255]
val_imgs = sorted(glob.glob(datapath + "/OASIS_OAS1_*_MR1/norm.nii.gz"))[255:265]
test_imgs = sorted(glob.glob(datapath + "/OASIS_OAS1_*_MR1/norm.nii.gz"))[265:]
Hi @rtx98,
Yes.
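As a side note (not part of the original exchange), the slice-based split above is easy to sanity-check: the three slices should partition all 414 OASIS volumes (255 + 10 + 149) with no overlap. The sketch below uses synthetic filenames in place of a real `datapath`.

```python
# Sanity-check sketch: the three slices should partition all 414
# OASIS volumes (255 + 10 + 149) with no overlap. Synthetic names
# stand in for glob.glob(datapath + "/OASIS_OAS1_*_MR1/norm.nii.gz").
imgs = sorted(f"OASIS_OAS1_{i:04d}_MR1/norm.nii.gz" for i in range(1, 415))
train, val, test = imgs[:255], imgs[255:265], imgs[265:]

assert (len(train), len(val), len(test)) == (255, 10, 149)
assert set(train).isdisjoint(val) and set(val).isdisjoint(test)
print("split OK")
```

Because `sorted` makes the ordering deterministic, everyone who runs the same glob over the same directory gets the same three subsets.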
Can you please provide the training and validation split used in the paper? For example, the training/validation split of the OASIS dataset for the Atlas-Based Registration (OASIS) task.
In the file Train_C2FViT_pairwise.py, there is commented-out code that uses
# imgs = sorted(glob.glob(datapath + "/OASIS_OAS1_*_MR1/norm.nii.gz"))[255:259]
# labels = sorted(glob.glob(datapath + "/OASIS_OAS1_*_MR1/seg35.nii.gz"))[255:259]
I suppose the validation Dice score in the log files is based on this? Is this the validation split used in the paper? I could not match the best Dice score reported in the paper with the score in the log file. I am really sorry if I misunderstood something. I am trying to reproduce the results of the paper, so I need the correct training-validation split. Also, thank you for your help, and congratulations on this awesome work!