Project-MONAI / MONAILabel

MONAI Label is an intelligent open source image labeling and learning tool.
https://docs.monai.io/projects/label
Apache License 2.0

MultiGPU usage problem #1507

Open gokceay opened 1 year ago

gokceay commented 1 year ago

Hi,

I was trying to segment the vertebrae from CT files with the MONAI radiology_full_ct-upgraded-HYBRID and radiology apps, using these commands in a multi-GPU (8 × NVIDIA T4) environment:

  1. monailabel start_server --app apps/radiology --studies apps/datasets/Task09_Spleen/imagesTs --conf models localization_spine,localization_vertebra,segmentation_vertebra

  2. monailabel start_server --app apps/radiology_full_ct-upgraded-HYBRID/ --studies datasets/Task09_Spleen/imagesTr --conf models segmentation_full_ct

When I ran the first command and activated the MONAILabel plugin in 3D Slicer, I was not able to select multi-GPU from the list on the left side of the screen:

[screenshot]

First I tried localization_spine and located the spine like this:

[screenshot]

It is not a good spine localization. How can I make it better? Would resampling the input medical images to a lower spacing and retraining the pretrained network solve the problem? If so, could you please share the pipeline for this case?

For the second step, vertebra localization, I also got a result like this:

[screenshot]

Another question: does it matter whether the input nii.gz file is a colonoscopy CT, a chest CT, or a CT acquired to visualize the spine? Maybe the intensity values differ and this affects the accuracy of the results? To get better results, do I need to do anything to my CT samples?

The third question is: when I run the vertebra_pipeline, MONAI Label did not utilize all 8 of my GPUs and gave the error below:

[screenshot]

What should I do to get better results and utilize multi-GPUs?

Thanks in advance

tangy5 commented 1 year ago

For single-subject inference, we currently don't have DDP or any other model-distribution setting that can run multi-GPU inference. For batch inference, we can do that. For training, we do support multi-GPU: if you go to the training options panel, you can select the multi-GPU checkbox.
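As a manual workaround sketch for batch inference (this is not a built-in MONAI Label feature): assign studies to GPUs in round-robin order and launch one inference process per GPU, pinned via `CUDA_VISIBLE_DEVICES`. Here `run_inference` is a hypothetical placeholder, not a real MONAI Label command:

```python
# Sketch only: round-robin assignment of a batch of studies to GPUs.
# "run_inference" is a placeholder for whatever per-image entry point you use.

def round_robin(studies, num_gpus):
    """Pair each study with a GPU index, cycling through the GPUs."""
    return [(path, i % num_gpus) for i, path in enumerate(studies)]

studies = ["spleen_01.nii.gz", "spleen_02.nii.gz", "spleen_03.nii.gz"]
for path, gpu in round_robin(studies, num_gpus=8):
    # Each line would be launched as a separate subprocess.
    print(f"CUDA_VISIBLE_DEVICES={gpu} run_inference {path}")
```

Each worker only sees its assigned device, so the batch is processed in parallel with one subject per GPU at a time.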

If you run out of memory during single-subject inference, I suggest downsampling the image to fit into GPU memory, because there is not much we can do to apply a sampler for data parallelism on a single subject. If you'd like to do multi-GPU batch inference, each GPU can run a subject at the same time. We can look into that.
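To illustrate the downsampling suggestion, a minimal sketch using nearest-neighbour slicing; a real pipeline would use a spacing-aware resampler (e.g. MONAI's `Spacingd` transform), but this is enough to show how shrinking the volume reduces the memory footprint:

```python
import numpy as np

def downsample(volume, factor):
    """Crude nearest-neighbour downsampling by an integer factor per axis.
    Halving each axis (factor=2) cuts the voxel count by 8x, which is
    often enough to make single-subject inference fit in GPU memory."""
    return volume[::factor, ::factor, ::factor]

ct = np.zeros((512, 512, 400), dtype=np.float32)
small = downsample(ct, factor=2)
print(small.shape)  # (256, 256, 200)
```

Remember to resample the resulting segmentation back to the original grid afterwards.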

For the vertebrae model performance issue, maybe @diazandr3s can better advise here. Thank you again for using MONAILabel. Hope the above helps.

diazandr3s commented 1 year ago

Hi @gokceay,

Thanks for opening this issue.

When I run the first command and activate the MONAILabel plugin from the 3D Slicer on the left side of the screen I was not able to select multiGPU from the list:

As @tangy5 mentioned, multi-GPU support in MONAI Label is only for training.

monailabel start_server --app apps/radiology --studies apps/datasets/Task09_Spleen/imagesTs --conf models localization_spine,localization_vertebra,segmentation_vertebra

This is a multistage approach for vertebra segmentation. It is inspired by the model from the paper "Coarse to Fine Vertebrae Localization and Segmentation with SpatialConfiguration-Net and U-Net" (GitHub repo).

It is not a good spine localization. How can I make it better? Resampling the input medical images to a lower spacing and retraining the pretrained network will solve the problem?

The model was trained on a portion of the VerSe dataset. As with other models in MONAI Label, they are examples. More training on the same dataset or other datasets might be needed.

If so, could you please share the pipeline for this case?

You should be able to retrain the models. Here you can find the trainers for all three models: https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/radiology/lib/trainers

You can do this in Slicer as well.

Another question: does it matter whether the input nii.gz file is a colonoscopy CT, a chest CT, or a CT acquired to visualize the spine? Maybe the intensity values differ and this affects the accuracy of the results? To get better results, do I need to do anything to my CT samples?

In theory, it should work on any CT. However, this model isn't robust enough as it was trained on a small dataset.
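One preprocessing step worth checking is intensity windowing: CT apps typically clip Hounsfield units to a fixed range before inference, and scans acquired with very different protocols can land outside it. A minimal sketch; the window values below are assumptions, so check the intensity transform in the app you are running for the values the model was actually trained with:

```python
def window_ct(hu, a_min=-1000.0, a_max=1000.0):
    """Clip Hounsfield units to a fixed window and rescale to [0, 1].
    The window (a_min, a_max) is illustrative, not the model's actual
    training configuration."""
    clipped = [min(max(v, a_min), a_max) for v in hu]
    return [(v - a_min) / (a_max - a_min) for v in clipped]

print(window_ct([-2000.0, 0.0, 1000.0]))  # [0.0, 0.5, 1.0]
```

If your scans are windowed the same way the training data was, differences between colon, chest, and spine protocols matter much less.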

Do you have access to another dataset? Happy to chat more about it. Let me know.

monailabel start_server --app apps/radiology_full_ct-upgraded-HYBRID/ --studies datasets/Task09_Spleen/imagesTr --conf models segmentation_full_ct

This is a different model. It was trained on the TotalSegmentator dataset for whole-body CT segmentation. Among the regions segmented are the vertebrae. This is a single-stage model.

Hope this helps

gokceay commented 1 year ago

@tangy5 @diazandr3s Thanks a lot for the detailed response.

@tangy5 After retraining the network with more data to get more accurate results, I would like to do multi-GPU batch inference; it would help a lot. Could you guide me on how to do multi-GPU batch inference?

@diazandr3s I was thinking about combining data from various sources such as the VerSe2020 dataset, the TotalSegmentator dataset, the CTSpine1K dataset, TCIA, etc.

If there is not enough labeled data, I will do semi-automatic segmentation in Slicer, active learning in MONAI Label, and retraining of the multistage vertebra segmentation approach. I am not sure which will be the more optimized solution for better vertebrae segmentation. Which one should I start with? Do you have any suggestions? I have an NVIDIA RTX 4090 on my local machine. In addition to the vertebrae, I would also like to get the ribs and other bones segmented with better accuracy from the CT; maybe I should retrain the TotalSegmentator model with more data?

Thanks in advance

diazandr3s commented 1 year ago

Hi @gokceay,

Combining data from multiple sources sounds like a great idea. I'd suggest you start by using the segmentation_full_ct model to get predictions on the unlabeled volumes. Even for the volumes that already have vertebra segmentations, you can combine the predictions with the vertebrae ground truth.
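Combining predictions with existing ground truth can be sketched as overwriting predicted voxels wherever the ground truth provides a vertebra label. The label IDs below are illustrative, not the model's actual label map:

```python
import numpy as np

def merge_labels(prediction, gt, gt_labels):
    """Overwrite predicted voxels with ground-truth labels wherever the
    ground truth carries one of the given (illustrative) vertebra IDs;
    everywhere else, keep the model's prediction."""
    merged = prediction.copy()
    mask = np.isin(gt, gt_labels)
    merged[mask] = gt[mask]
    return merged

pred = np.array([1, 1, 2])   # model prediction (e.g. 1=rib, 2=liver)
gt = np.array([0, 3, 4])     # ground truth (0=background, 3/4=vertebrae)
print(merge_labels(pred, gt, gt_labels=[3, 4]))  # [1 3 4]
```

The result keeps the full-body predictions while trusting the curated vertebra annotations where they exist.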

Then, you can do active learning by first correcting the predictions from these volumes and retraining.
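The suggested workflow can be sketched as one function per round; `predict`, `review`, and `retrain` below are stand-ins for MONAI Label inference, manual correction in 3D Slicer, and the app's trainer, and none of them are real APIs:

```python
# Conceptual sketch of an active-learning round; all callables are
# hypothetical placeholders for the real tools.

def active_learning_round(unlabeled, labeled, predict, review, retrain):
    """One round: auto-segment the unlabeled volumes, correct the
    predictions, fold them into the labeled pool, and retrain."""
    corrected = [review(predict(x)) for x in unlabeled]
    labeled = labeled + corrected
    return retrain(labeled), labeled
```

Each round grows the labeled pool, so the model to correct against improves over time and the manual effort per volume shrinks.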

Happy to further discuss this in a video call.

gokceay commented 1 year ago

Hi @diazandr3s

Thanks a lot for your help. A video call would help greatly. I will send an e-mail about it. My e-mail address: gguvenster@gmail.com

Thanks in advance