Open petersampodrias opened 1 month ago
Hi, we provide some example data so that you can explore our code.
Hi, the dataset format should be NPZ according to your codebase. However, the data preprocessing steps mentioned in SSL-ALPNet convert DCM files (2D slices for each subject ID) into NIfTI (3D) files for each ID. Should we convert the NIfTI files back to NPZ format for each slice (3D to 2D slices)? Doesn’t this seem redundant—2D to 3D to 2D? Could you please clarify whether we should convert DCM files directly to NPZ?
Hi, the reason we resaved the data to NPZ is to strictly follow the evaluation strategy of previous works. The NPZ files are obtained via https://github.com/DeepMed-Lab-ECNU/FS_MedSAM2/blob/2290785b840975b308cac3a8898cf6433c40ddb2/validation_wopred.py#L189 . If you already know the support & query pair, you can convert DCM to NPZ directly.
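For reference, slicing a 3D volume back into per-slice NPZ files can be sketched roughly as below. This is a minimal, hypothetical helper, not the repo's actual code: the key names (`image`, `label`), the slicing axis, and the filename pattern are assumptions — check `validation_wopred.py` for the exact keys FS_MedSAM2 expects. A real NIfTI volume would be loaded with e.g. `nibabel` (`nib.load("case01.nii.gz").get_fdata()`); a synthetic array stands in for it here.

```python
import os
import tempfile
import numpy as np

def volume_to_npz_slices(image_3d, label_3d, out_dir, case_id):
    """Save each slice along the last axis of a 3D volume as its own NPZ file.

    Hypothetical helper: keys 'image'/'label' and the slicing axis are
    assumptions, not the repo's verified format.
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for z in range(image_3d.shape[-1]):
        path = os.path.join(out_dir, f"{case_id}_slice{z:03d}.npz")
        # One compressed NPZ per 2D slice, holding the image and its mask.
        np.savez_compressed(path, image=image_3d[..., z], label=label_3d[..., z])
        paths.append(path)
    return paths

# Synthetic stand-in for a volume loaded from NIfTI:
image = np.random.rand(64, 64, 3).astype(np.float32)
label = (image > 0.5).astype(np.uint8)
with tempfile.TemporaryDirectory() as d:
    files = volume_to_npz_slices(image, label, d, "case01")
    print(len(files))  # one NPZ file per slice
```

Converting DCM directly to NPZ is analogous: read each 2D slice (e.g. with `pydicom`) and save it with `np.savez_compressed`, skipping the intermediate NIfTI stacking entirely.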
Thanks for the response. I have one more query: if we want to train it a little, how should we create the training dataset? The validation_wopred.py file seems very specific to building the validation dataset. Could you help me create the training dataset? Should we modify training.py from SSL-ALPNet, or just include the entire dataset in validation_wopred.py?
Hi, I wanted to know the approach for predicting an image's masks using image–mask pairs as support. Is any preprocessing required? Do I have to save my data in a specific format, or can I perform inference on it directly?