neuropoly / totalspineseg

Segmentation of vertebrae, intervertebral discs, spinal cord and CSF from MRI images.

Add Sacrum, Enhance SC Seg & Expand DA for Multi-Contrast #33

Open yw7 opened 1 week ago

yw7 commented 1 week ago

This pull request introduces several significant improvements to our model training process and dataset handling:

NathanMolinier commented 1 week ago

I'm currently following the README and running the installation, and I noticed a few things:

NathanMolinier commented 1 week ago

Regarding the installation of the packages:

However, I got a version error, probably related to the installation of the nnunetv2 package:

[Screenshot 2024-06-27 at 11:49:46]

To fix this, I recreated a virtual environment and only ran the totalspineseg package installation. This time I did not get any errors, so I will try to execute the next steps with this new environment. I'm thinking the extra torch installation might not be needed.
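For reference, a minimal sketch of this clean-environment approach (the environment name tss-venv is illustrative, not from the repo docs):

```shell
# Recreate a fresh virtual environment so no previously installed
# torch build interferes with dependency resolution.
python3 -m venv tss-venv
./tss-venv/bin/python -m pip --version   # confirm pip is available in the venv
# From the repository root, installing only the package would then be:
#   ./tss-venv/bin/python -m pip install -e .
```

The actual install line is left as a comment here since it must be run from the repository root.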

yw7 commented 1 week ago

I'm currently following the README and running the installation, and I noticed a few things:

  • we are not bound to a specific Python version
  • we are not bound to specific versions of the packages either in the pyproject.toml. Maybe it is fine, but I'm also thinking that it could lead to unexpected errors with the newest versions of the packages. We could also keep unpinned versions on the main branch and only fix versions when doing releases.

Regarding Python, there is a limitation: https://github.com/neuropoly/totalspineseg/blob/395e347bc02260b6ae8658f229d60280f6cdd142/pyproject.toml#L4C1-L4C26. But you're right: specifying the versions could prevent errors, though it could lead to long times reinstalling all the packages. We can add specific versions or ranges in pyproject.toml.
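As an illustration of the range-based pinning idea (the version numbers below are hypothetical, not the project's actual constraints):

```toml
# Hypothetical pyproject.toml fragment: ranges instead of exact pins,
# so patch releases install freely but breaking majors are excluded.
[project]
requires-python = ">=3.9"
dependencies = [
    "nnunetv2>=2.4,<3.0",
]
```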

yw7 commented 1 week ago

Regarding the installation of the packages:

  • I installed torch using the command pip3 install torch torchvision torchaudio
  • Then I installed the totalspineseg package and the dependencies by running python -m pip install -e .

However, I got a version error, probably related to the installation of the nnunetv2 package: [Screenshot 2024-06-27 at 11:49:46]

To fix this, I recreated a virtual environment and only ran the totalspineseg package installation. This time I did not get any errors, so I will try to execute the next steps with this new environment. I'm thinking the extra torch installation might not be needed.

The problem is that some CUDA versions need a specific version of PyTorch, so it needs to be done as described on their website. Besides, this is also how it is recommended for nnUNet: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/installation_instructions.md#installation-instructions.
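A sketch of how picking the right PyTorch build could look. The wheel index URLs follow the pattern shown on pytorch.org, but the CUDA version below (cu121) is only an example; the correct index depends on the local CUDA toolkit:

```shell
# Choose a PyTorch wheel index depending on whether an NVIDIA GPU is present.
# This only prints the suggested command rather than running the install.
if command -v nvidia-smi >/dev/null 2>&1; then
    TORCH_INDEX="https://download.pytorch.org/whl/cu121"   # example: CUDA 12.1 build
else
    TORCH_INDEX="https://download.pytorch.org/whl/cpu"     # CPU-only build
fi
echo "pip3 install torch torchvision torchaudio --index-url $TORCH_INDEX"
```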

NathanMolinier commented 1 week ago

Regarding Python, there is a limitation: https://github.com/neuropoly/totalspineseg/blob/395e347bc02260b6ae8658f229d60280f6cdd142/pyproject.toml#L4C1-L4C26

Maybe we could make this constraint explicit when creating the environment at the beginning of the README?

NathanMolinier commented 1 week ago

The problem is that some CUDA versions need a specific version of PyTorch, so it needs to be done as described on their website. Besides, this is also how it is recommended for nnUNet: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/installation_instructions.md#installation-instructions.

You are right indeed! I wonder how we could make this installation more robust.

I'm retrying with a conda environment instead.

NathanMolinier commented 1 week ago

I am getting an error about a missing package called funseg when running bash "$TOTALSPINESEG"/scripts/prepare_nnunet_datasets.sh

[Screenshot 2024-06-27 at 13:46:37]

yw7 commented 1 week ago

I am getting an error about a missing package called funseg when running bash "$TOTALSPINESEG"/scripts/prepare_nnunet_datasets.sh

[Screenshot 2024-06-27 at 13:46:37]

You're right, there was a problem with the pyproject. I've pushed a fix.

NathanMolinier commented 1 day ago

I just ran this command:

bash "$TOTALSPINESEG"/scripts/prepare_nnunet_datasets.sh

and noticed that the generated folder takes more than 800 GB. Also, I was not able to run the command completely due to the size limitation on my device.

[Screenshot 2024-07-04 at 09:32:34]

Why is the data used for step 2 a lot bigger than for step 1?

So I think we should first warn the user about the total storage needed for the full command, and maybe create a flag to limit the total data augmentation? Also, how can I specify the step for which I want to generate the data?
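A hypothetical pre-flight check sketching the warning idea (this helper is not part of the repo scripts; the ~800 GB figure comes from the observation above):

```shell
# Warn before running prepare_nnunet_datasets.sh if free disk space
# in the current directory looks too low for the generated datasets.
REQUIRED_GB=800
AVAIL_GB=$(df -Pk . | awk 'NR==2 {print int($4/1024/1024)}')
if [ "$AVAIL_GB" -lt "$REQUIRED_GB" ]; then
    echo "Warning: only ${AVAIL_GB} GB free; ~${REQUIRED_GB} GB may be needed."
else
    echo "OK: ${AVAIL_GB} GB free."
fi
```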

yw7 commented 3 hours ago
  • we are not bound to a specific Python version

Maybe we could make this constraint explicit when creating the environment at the beginning of the README?

I've updated the README.

  • we are not bound to specific versions of the packages either in the pyproject.toml. Maybe it is fine, but I'm also thinking that it could lead to unexpected errors with the newest versions of the packages. We could also keep unpinned versions on the main branch and only fix versions when doing releases.

I've opened an issue for this: https://github.com/neuropoly/totalspineseg/issues/36

and noticed that the generated folder takes more than 800 GB. Also, I was not able to run the command completely due to the size limitation on my device.

I've updated the README to include hardware specifications for training.

Why is the data used for step 2 a lot bigger than for step 1?

This is regarding this issue: https://github.com/neuropoly/totalspineseg/issues/30

So I think we should first warn the user about the total storage needed for the full command, and maybe create a flag to limit the total data augmentation? Also, how can I specify the step for which I want to generate the data?

Done! See the README.