csyfjiang / M4oE

Regarding the training guideline #3

Open IceIce1ce opened 1 month ago

IceIce1ce commented 1 month ago

Hi authors,

First of all, congratulations on your paper being published at MICCAI, and thank you for the great code. Your training code requires train.csv and val.csv files, and I found that mkcsv.py can create them. However, the original dataset only contains .gz files, not .npz files. How can I obtain the .npz files needed to create these CSV files?

csyfjiang commented 1 month ago

Thank you for your interest in our paper and code, and for your kind words on our MICCAI publication.

Regarding your question about the .npz files: our preprocessing originally drew inspiration from the methods used in nnUNet and MONAI. The .npz format is simply NumPy's compressed array container and is not strictly required for our approach.

In fact, you have the flexibility to adapt the preprocessing and data loading sections in our dataloader to suit your specific project needs. The .npz format was used in our implementation, but it's not mandatory. You can modify the dataloader to work with your .gz files directly or implement a preprocessing step that converts .gz to a format that works best for your setup.
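For the conversion route, a minimal sketch along these lines could work, assuming your .gz files are NIfTI volumes (.nii.gz); the `image` key, directory names, and function name here are placeholders to adapt to your setup:

```python
import glob
import os

import nibabel as nib
import numpy as np


def convert_nii_gz_to_npz(src_dir, dst_dir):
    """Convert every .nii.gz volume in src_dir to a compressed .npz in dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(src_dir, "*.nii.gz"))):
        volume = nib.load(path).get_fdata(dtype=np.float32)
        case_id = os.path.basename(path).replace(".nii.gz", "")
        # 'image' is an assumed key; use whatever key your dataloader reads.
        np.savez_compressed(os.path.join(dst_dir, case_id + ".npz"), image=volume)


convert_nii_gz_to_npz("raw_data", "preprocessed")
```

After the conversion, mkcsv.py should be able to pick up the resulting .npz files when building train.csv and val.csv.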

The key is to ensure that your data is properly preprocessed and can be efficiently loaded into the model during training. Feel free to adjust our dataloader methods to accommodate the file format and preprocessing steps that are most appropriate for your project.
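Alternatively, if you prefer to skip the conversion entirely, here is a rough sketch of reading .nii.gz files directly inside a PyTorch Dataset; the single `image` column of file paths in the CSV is an assumption, so match it to what mkcsv.py actually produces:

```python
import nibabel as nib
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset


class NiftiDataset(Dataset):
    """Loads volumes listed in a CSV directly from .nii.gz, with no .npz step."""

    def __init__(self, csv_path):
        # An 'image' column of file paths is assumed; adjust to your CSV layout.
        self.paths = pd.read_csv(csv_path)["image"].tolist()

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        volume = nib.load(self.paths[idx]).get_fdata(dtype=np.float32)
        return torch.from_numpy(volume)
```

Decompressing .nii.gz on every access is slower than reading pre-converted .npz files, so the direct-loading route trades some training throughput for a simpler pipeline.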

If you have any further questions about adapting the code to your needs, please don't hesitate to ask. We're here to help!