Closed bhaveshsharma91 closed 6 months ago
Dear @bhaveshsharma91,

Thank you for choosing nnU-Net for your university assignment on medical image segmentation!

1. Data Format Compatibility and Dataset JSON Generation:
- The .nddr format is not directly supported by nnU-Net. For medical imaging, nnU-Net predominantly works with NIfTI formats. To convert .nddr files into NIfTI, which is the default format for nnU-Net, I recommend using the nibabel library. This Python package provides support for reading and writing .nddr and NIfTI files, which should help in preparing your dataset.
- For generating the dataset.json file, you can refer to our detailed guide available at Generating dataset.json (https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/dataset_format.md#datasetjson). This documentation outlines the necessary steps and structure required to create this file properly, facilitating a smoother setup of your dataset for use with nnU-Net.

2. Utilizing Pretrained Models:
- Regarding pretrained models for head and neck organ-at-risk segmentation within the nnU-Net framework, we do not have a specific pretrained model available. However, I recommend considering the model weights from the winning solution of the SegRap challenge as a form of pretraining for your project. You can find these weights at SegRap 2023 Winning Solution (https://github.com/Astarakee/segrap2023). While utilizing these pretrained weights might improve your results, please note that improvements are not guaranteed.

Should you need any further assistance or have additional questions, do not hesitate to reach out.

Best regards, Karol Gotkowski
Dear Karol,
Thank you very much for your detailed response and guidance on preparing my dataset for the nnU-Net framework. Your advice on converting .nddr files to NIfTI using the nibabel library is greatly appreciated, as is the direction provided for generating the dataset.json file. I am also grateful for the recommendation to consider the pretrained models from the SegRap challenge, which I believe will be invaluable for my project on head and neck organ-at-risk segmentation.
I would like to seek further clarification on how to handle my dataset for the preprocessing stage within nnU-Net. Specifically, I have 42 cases, with each case comprising one CT image, one MRI image, and 30 segmentation maps. My questions are as follows:
Merging Segmentation Maps: For the preprocessing stage, should all 30 segmentation maps for a single case be merged into a single file? If so, could you recommend an efficient method or tool within nnU-Net or externally that can be used for this purpose?
Handling Different Image Shapes: The CT and MRI images in my dataset come in various shapes and sizes. Does the preprocessing stage in nnU-Net automatically handle the resizing of these images to ensure they are compatible with the network's requirements? Or is there a specific protocol I should follow to resize these images before feeding them into the nnU-Net pipeline?
Ensuring the data is correctly preprocessed is crucial for the success of my project, and your expertise would greatly aid in navigating these steps. I am looking forward to your guidance on these matters.
Thank you once again for your support and assistance.
Best regards, Bhavesh Sharma
Hey @bhaveshsharma91,

- Merging segmentation maps: Yes, you need to merge the segmentation maps into a single segmentation. I assume each segmentation is a binary map representing a single class? You can merge them with NumPy; there is no script for this in nnU-Net. If you are unsure how, I recommend searching StackOverflow or asking ChatGPT.
- Handling different image shapes: nnU-Net is able to handle different image shapes automatically. It sounds like you always have pairs of CT and MRI scans that show the same region in different imaging modalities, right? If so, I recommend reading the following subsection of the documentation on handling multiple modalities: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/dataset_format.md#dataset-folder-structure

Best regards, Karol
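The NumPy merge Karol suggests could look roughly like this (a minimal sketch with toy arrays; in practice each of the 30 masks would be loaded from its own file, and the order of the list defines the label values):

```python
import numpy as np

# Toy stand-ins for the binary organ masks (all must share one shape).
masks = [np.zeros((4, 4, 4), dtype=np.uint8) for _ in range(3)]
masks[0][0, 0, 0] = 1  # organ 1
masks[1][1, 1, 1] = 1  # organ 2

# Merge into one label map: background stays 0, organ i gets label i.
merged = np.zeros_like(masks[0])
for label, mask in enumerate(masks, start=1):
    merged[mask > 0] = label  # later organs overwrite earlier ones where masks overlap

print(merged[0, 0, 0], merged[1, 1, 1])  # -> 1 2
```

Note the overwrite-on-overlap behavior: it is only harmless if the organ masks are mutually exclusive, which is worth verifying before merging.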
It seems like we're dealing with a segmentation task where each label corresponds to a different organ in the Head and Neck (HaN) region. Given this, I'm unsure whether this scenario fits better as a multiclass or a multilabel segmentation problem. Could you provide some guidance or advice on distinguishing between these two segmentation approaches for this context?
[image: image.png]
[image: image.png]
[image: image.png]
Hey,
Could you please check the images you attempted to upload? They are not displayed for me. Regarding the labels: do the head labels and the neck labels overlap?
Best regards, Karol
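Karol's overlap question can also be answered programmatically; a quick NumPy check of whether any two binary masks share voxels (toy arrays here, standing in for two loaded label maps):

```python
import numpy as np

a = np.zeros((4, 4), dtype=np.uint8)
b = np.zeros((4, 4), dtype=np.uint8)
a[0, 0] = 1
b[0, 0] = 1  # shared voxel -> these two labels overlap

# True if any voxel belongs to both masks.
overlap = np.logical_and(a > 0, b > 0)
print(overlap.any())  # -> True
```

If no mask pair overlaps, a single multiclass label map (one integer per voxel) is sufficient; any overlapping pair pushes the problem toward a multilabel formulation.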
Can anyone help me understand this and suggest a solution? I suppose my GPU is not powerful enough to handle this training?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 544.00 MiB. GPU 0 has a total capacty of 4.00 GiB of which 0 bytes is free. Of the allocated memory 5.79 GiB is allocated by PyTorch, and 626.70 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Exception in thread Thread-6:
Traceback (most recent call last):
  File "C:\Users\bhave\anaconda3\envs\imagesegmentation\lib\threading.py", line 950, in _bootstrap_inner
    self.run()
  File "C:\Users\bhave\anaconda3\envs\imagesegmentation\lib\threading.py", line 888, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\bhave\anaconda3\envs\imagesegmentation\lib\site-packages\batchgenerators\dataloading\nondet_multi_threaded_augmenter.py", line 125, in results_loop
    raise e
  File "C:\Users\bhave\anaconda3\envs\imagesegmentation\lib\site-packages\batchgenerators\dataloading\nondet_multi_threaded_augmenter.py", line 103, in results_loop
    raise RuntimeError("One or more background workers are no longer alive. Exiting. Please check the "
RuntimeError: One or more background workers are no longer alive. Exiting. Please check the print statements above for the actual error message
Hey,
It appears your GPU has only 4 GB of memory. nnU-Net requires about 10 GB of GPU memory for training.
Best regards, Karol
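For completeness, the allocator hint from the traceback can be tried by setting an environment variable before launching training (the value 128 is just an illustrative starting point); note that this only reduces fragmentation and cannot work around the hard 4 GB limit:

```shell
# Reduce CUDA allocator fragmentation; does not add physical GPU memory.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```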
I am currently working on a university assignment that involves medical image segmentation. I have chosen to work with the HaN-Seg dataset, which focuses on the head and neck organ-at-risk CT and MR segmentation. As I delve into this project, I've encountered a couple of challenges and would greatly appreciate any guidance or advice you can offer.
Data Format Compatibility:
My dataset is in .nddr format. I understand nnU-Net typically works with DICOM, NIfTI, or similar formats for medical imaging. Is the .nddr format supported by nnU-Net? If not, could you recommend the best approach to convert .nddr files into a format compatible with nnU-Net?

Generating dataset.json:
I am also trying to figure out how to automatically generate the dataset.json file required by nnU-Net. Any suggestions on tools or scripts that could facilitate this process would be highly beneficial.
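The dataset.json that nnU-Net v2 expects can also be written by hand with the standard json module; a minimal sketch (the organ names below are placeholders for the actual 30 HaN structures):

```python
import json

# Minimal nnU-Net v2 dataset.json sketch; adjust entries to the real dataset.
dataset = {
    "channel_names": {"0": "CT", "1": "MR"},  # one entry per input modality
    "labels": {"background": 0, "organ_1": 1, "organ_2": 2},  # extend to all 30 organs
    "numTraining": 42,
    "file_ending": ".nii.gz",
}

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)
```

nnU-Net v2 also ships a generate_dataset_json helper in its dataset_conversion module, which may save hand-writing this; the linked dataset_format.md documentation is the authoritative reference for the required fields.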
Utilizing Pretrained Models:
Given the specific focus of my project on head and neck organ-at-risk segmentation: are there any pretrained models available within the nnU-Net framework that are suitable for this task? If so, could you provide some insights on how to effectively utilize these models for the HaN-Seg dataset?

I am keen on leveraging nnU-Net's capabilities for my project and ensuring that I adhere to best practices in medical image segmentation. Any advice, resources, or examples you could share would be immensely valuable.
Thank you for your time and assistance.