Project-MONAI / MONAILabel

MONAI Label is an intelligent open source image labeling and learning tool.
https://docs.monai.io/projects/label
Apache License 2.0

RuntimeError when "brats_mri_segmentation_v0.2.1" from monaibundle is used. #1051

Open PranayBolloju opened 1 year ago

PranayBolloju commented 1 year ago

Describe the bug
MONAI Label server is giving the following error when "brats_mri_segmentation_v0.2.1" is used for brain tumor segmentation.

RuntimeError: Given groups=1, weight of size [16, 4, 3, 3, 3], expected input[1, 240, 240, 240, 160] to have 4 channels, but got 240 channels instead

To Reproduce
Steps to reproduce the behavior:

  1. pip install monailabel
  2. monailabel apps --download --name monaibundle --output apps
  3. monailabel datasets --download --name Task01_BrainTumour --output datasets
  4. monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1
  5. Run the model in 3D Slicer with any image from the dataset.

Expected behavior
Segmentation should be displayed in 3D Slicer.


Environment

Ensuring you use the relevant python executable, please paste the output of:

python -c 'import monai; monai.config.print_debug_info()'

================================ Printing MONAI config...

MONAI version: 1.0.0
Numpy version: 1.22.4
Pytorch version: 1.12.1+cpu
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 170093375ce29267e45681fcec09dfa856e1d7e7
MONAI file: C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\__init__.py

Optional dependencies:
Pytorch Ignite version: 0.4.10
Nibabel version: 4.0.2
scikit-image version: 0.19.3
Pillow version: 9.2.0
Tensorboard version: 2.10.0
gdown version: 4.5.1
TorchVision version: 0.13.1+cpu
tqdm version: 4.64.0
lmdb version: 1.3.0
psutil version: 5.9.1
pandas version: 1.4.3
einops version: 0.4.1
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: 0.4.3

tangy5 commented 1 year ago

Hi @PranayBolloju ,

For the BRATS bundle, each input volume contains 4 channels. The brats_mri_segmentation_v0.2.1 bundle needs a pre-processing step for BRATS data later than 2018. For the data you downloaded from Task01, the four MRI modalities are already in one NIfTI file, but the channel dimension is last, e.g., (240, 240, 160, 4): the 4 is at index 3. A solution is to preprocess the data so it is compatible with the bundle input: transpose the image to (4, 240, 240, 160).

Thanks for reporting this. We should add a note in the bundle Readme or on the MONAI Label side to remind users about pre-processing BRATS data. Hope this helps to solve your problem.
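
For reference, a minimal transpose sketch (assuming nibabel is installed; the file names are just examples):

import nibabel as nib
import numpy as np

img = nib.load("BRATS_001.nii.gz")                  # channel-last Task01 volume, e.g. (240, 240, 160, 4)
data = np.transpose(img.get_fdata(), (3, 0, 1, 2))  # move the modality axis first -> (4, 240, 240, 160)
nib.save(nib.Nifti1Image(data, img.affine), "BRATS_001_chfirst.nii.gz")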

PranayBolloju commented 1 year ago

Hi @tangy5 ,

Thanks for the response. Can you suggest a way to preprocess the data i.e. transpose images?

diazandr3s commented 1 year ago

@tangy5, does the input to the bundle brats_mri_segmentation_v0.2.1 need to be channel-first?

Do the transforms AsChannelFirstd or AsChannelLastd help?

Perhaps we only need to add this argument when loading the images: https://github.com/Project-MONAI/MONAI/blob/dev/monai/transforms/io/dictionary.py#L128

Here is where this can be added: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L37 as well as in training: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/train.json#L59
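
For illustration, the Python equivalent of that loader change would be something like this (a sketch; the bundle itself expresses it in JSON):

from monai.transforms import LoadImaged

# ensure_channel_first moves (or creates) the channel dimension at position 0,
# so a channel-last Task01 volume loads as (4, 240, 240, 160)
load = LoadImaged(keys="image", ensure_channel_first=True)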

diazandr3s commented 1 year ago

Hi @PranayBolloju,

I have tried this model myself and got the same error.

I've also changed the LoadImage args and managed to get a prediction. I think the quality of the model can be easily improved. Please watch this video:

https://user-images.githubusercontent.com/11991079/194732741-6d55c171-0eb6-4661-97fc-8fa0004897be.mp4

One thing you could do is first update both the inference and train files (add the ensure_channel_first arg) and then re-train the model using the Task01_BrainTumour dataset.

Please follow these steps: https://github.com/Project-MONAI/MONAILabel/discussions/1055#discussioncomment-3830237

BTW, there is another unsolved issue regarding multimodality/multiparametric images in Slicer. When a NIfTI file has more than one modality, Slicer reads only one.

NIfTI can be messy and that's why I make Slicer not consider the orientation. Ugly solution :/

MONAI Label does support multiparametric images, but Slicer can't read multiple images when they are loaded in a single NIfTI file. More on this here: https://github.com/Project-MONAI/MONAILabel/pull/729#discussion_r872369612

tangy5 commented 1 year ago


Thanks @diazandr3s , I got the same issue loading multimodality data in Slicer. Your solution looks good; we might need to add a note to the monaibundle Readme on using the BRATS bundle, covering both the image pre-processing and a reminder about Slicer loading multi-channel images.

PranayBolloju commented 1 year ago

Hi @diazandr3s

Thanks for the video. I have tried the suggestions and got the prediction. The segmentation looks fine in 3D but nothing comes up in the other slice views. (screenshot)

diazandr3s commented 1 year ago

Thanks for the update, @PranayBolloju

As you can see from the video (minute ~1:11), I proposed an ugly solution (discarding the orientation) for MONAI Label to load the multimodality images in Slicer.

I was wondering if all modalities are absolutely needed for your use case. Otherwise, I'd suggest working with a single modality, as that avoids this change on the Slicer module side.

Let us know

PranayBolloju commented 1 year ago

Hi @diazandr3s ,

If it is possible to do brain hemorrhage or tumor segmentation with equal accuracy when single-modality or multimodality images are used, then I suppose we don't need to use multimodality images.

diazandr3s commented 1 year ago


Hi @PranayBolloju,

Brain hemorrhage and tumor segmentation are two different tasks and they use different image modalities. AFAIK, for brain hemorrhage segmentation you employ CT images, while for brain tumor segmentation MR images are more commonly used.

PranayBolloju commented 1 year ago

Hi @diazandr3s ,

Thanks for the insights. Is there any model available separately for brain hemorrhage segmentation, or can we use the same model used for tumor segmentation?

diazandr3s commented 1 year ago

Although no brain hemorrhage segmentation model (using CT images) is available in MONAI Label, it shouldn't be difficult for you to create one from a public dataset like this one: https://instance.grand-challenge.org/

You may find this useful as well: https://github.com/Project-MONAI/MONAILabel/discussions/1055#discussioncomment-3830237

Regarding the brain tumor segmentation model (using MR images), you could use the same Task01_BrainTumour dataset but with a single modality.

Hope this helps,
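
If a single modality is enough, a minimal extraction sketch (assuming nibabel; the channel index is an assumption, Task01 stores FLAIR/T1w/T1gd/T2w):

import nibabel as nib

img = nib.load("BRATS_001.nii.gz")  # channel-last Task01 volume, e.g. (240, 240, 155, 4)
single = img.get_fdata()[..., 0]    # keep one modality; index 0 assumed to be FLAIR
nib.save(nib.Nifti1Image(single, img.affine), "BRATS_001_single.nii.gz")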

PranayBolloju commented 1 year ago

Hi @diazandr3s

Thanks a lot for this information. The dataset you have provided the link to says it's a forbidden dataset. Is there a way to get a dataset, perhaps one with hemorrhage segmentation labels?

diazandr3s commented 1 year ago

That's strange. Have you registered for the challenge? If yes, and it still doesn't work, have you contacted any of the organizers? https://instance.grand-challenge.org/Organizers/

PranayBolloju commented 1 year ago

Hi @diazandr3s I have registered for the challenge and they are asking me to sign an agreement and send it by email. I have done that too but did not get any reply from them.

diazandr3s commented 1 year ago

Hi @PranayBolloju,

I'd suggest you try another dataset like this one: https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection Hopefully, the organizers reply soon.

PranayBolloju commented 1 year ago

Hi @diazandr3s ,

Many thanks for the suggestions,

I have seen that dataset too, but it does not contain 3D images and it does not have annotations. We would have to annotate the hemorrhages ourselves, which might lead to wrong labeling. I was hoping to get a dataset already annotated by experts, like the Task01_BrainTumour or INSTANCE 2022 datasets.

In case I don't find any pre-annotated dataset, as a last resort I will attempt to label the segmentations using 3D Slicer. There are a couple of questions in this section:

* How to give DICOM images as an input to MONAI Label if working on a local machine (not a DICOMweb server)?

* What model can we use to do 2D segmentation in MONAI Label for brain images?

diazandr3s commented 1 year ago


Hi @PranayBolloju,

Regarding this:

I have seen that dataset too, but it does not contain 3D images and it does not have annotations. We would have to annotate the hemorrhages ourselves, which might lead to wrong labeling. I was hoping to get a dataset already annotated by experts, like the Task01_BrainTumour or INSTANCE 2022 datasets.

I fully understand. I hope the challenge organizers reply soon; that will facilitate things a lot. I was wondering whether you have access to expert manpower that can help create these labels. Can I ask what use case you have in mind once you get the trained model?

  • How to give DICOM images as an input to MONAI Label if working on a local machine(not a dicomweb server)?

Currently, MONAI Label does not support DICOM images in a local folder. There are two options here: 1/ convert the images to NRRD or NIfTI format and then work from a local folder, or 2/ use a DICOMweb server. (See the conversion sketch at the end of this comment.)

  • What model can we use to do 2D segmentation in monailabel for brain images?

MONAI Label has examples for 2D segmentation, such as the endoscopy and pathology apps. The question is which viewer you want to integrate MONAI Label with.

You could also modify the radiology app to work on 2D as well. Please see discussion: https://github.com/Project-MONAI/MONAILabel/discussions/829

Hope this helps,
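
As mentioned above, a minimal conversion sketch for option 1/ (SimpleITK used here as one possible reader; paths are placeholders):

import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
files = reader.GetGDCMSeriesFileNames("path/to/dicom_series")  # one DICOM series per folder assumed
reader.SetFileNames(files)
image = reader.Execute()
sitk.WriteImage(image, "local_folder/image.nii.gz")  # or .nrrd for NRRD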

PranayBolloju commented 1 year ago

Hi @diazandr3s ,

Thank you for all the suggestions, it really helped.

I was following your suggestion and converted some images from NIFTI to DICOM using plastimatch. The command I have used is: plastimatch convert --patient-id patient1 --input BRATS_001.nii.gz --output-dicom BRATS_001

I have added the 'ensure_channel_first' arg in inference.json in monaibundle\brats_mri_segmentation_v0.2.1\configs.

Then I started the MONAI Label server using this command: monailabel start_server -a apps\monaibundle -s <URL to Google DICOM Web server> -c models brats_mri_segmentation_v0.2.1

I was able to see the images stored in the Google DICOM web server in 3D Slicer, but when I tried to run inference I got the following error.

(screenshot of the error)

The same model was doing tumor segmentation perfectly when using local images, i.e., NIfTI.

diazandr3s commented 1 year ago

Hi @PranayBolloju,

Thanks for the update.

Did you make sure the DICOM images are multiparametric? I mean, does the input have the 4 modalities needed for the pretrained model?

I believe this is why you're getting this error.

Hope this helps,

PranayBolloju commented 1 year ago

Hi @diazandr3s ,

Thanks for the reply

I think the images converted to DICOM do not have the 4 modalities. I have tried 2 ways to convert the images:

* By using the export-to-DICOM option from 3D Slicer.

* plastimatch convert --patient-id patient1 --input BRATS_001.nii.gz --output-dicom BRATS_001

Is there a way to preserve the modalities when converting to DICOM?

diazandr3s commented 1 year ago


BRATS or Task01_BrainTumour are highly preprocessed datasets: they are skull-stripped and the modalities are co-registered. It is not easy to find a similar dataset with these characteristics.

I'm not sure about this, but I think you can't save all modalities in a single DICOM file.

diazandr3s commented 1 year ago


@wyli do you know if this is possible? Can we store 4 modalities in a single DICOM file?

PranayBolloju commented 1 year ago

Hi @diazandr3s

Thanks for the clarification.

I went ahead and trained a model with the converted images (i.e., images converted to a single modality). The following are the changes I made in the config files before training the model.

The model was trained successfully with 300 epochs and an average Dice score of around 81. But when I tried inference, only one of the labels was being segmented.

(screenshot)

Is there anything I have missed here?

diazandr3s commented 1 year ago

Hi @PranayBolloju,

Thanks for the update. It's good to see these results.

Does this happen to all test cases? Which modality did you use here?

Bear in mind that the tumor core (necrotic area) and edema (whole tumor) are visible on the other modalities (T1 + Contrast, T2, etc). That's mainly the reason for using different modalities.

PranayBolloju commented 1 year ago

Hi @diazandr3s Yes, it is happening to all test cases.

These are a couple of images used for this model. BRATS_001.nii.gz BRATS_002.nii.gz

And these are the labels. BRATS_001.nii.gz BRATS_002.nii.gz

And a similar thing happens when doing inference with the pretrained model from monaibundle, i.e., brats_mri_segmentation_v0.2.1, on the "Task01_BrainTumour" dataset. All three labels can be seen in the Segment Editor but only one label is visible in the mask. (screenshot)

diazandr3s commented 1 year ago

Thanks for clarifying this, @PranayBolloju.

It seems this issue comes from the post-processing transforms:

Please change this argument (https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L76) to softmax=true and this (https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L90) to argmax=true

They should work like this: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/infers/deepedit.py#L118-L119
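
In Python terms, the suggested post-transforms would be roughly (a sketch; the bundle encodes the same in inference.json):

from monai.transforms import Activationsd, AsDiscreted

post = [
    Activationsd(keys="pred", softmax=True),  # instead of sigmoid=true
    AsDiscreted(keys="pred", argmax=True),    # instead of per-channel thresholding
]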

It seems the network outputs 3 channels but only one is being shown in Slicer.

Please let me know how that goes.

PranayBolloju commented 1 year ago

Hi @diazandr3s

I have changed the suggested lines. (screenshot)

And the segmentation now looks like this. (screenshot)

diazandr3s commented 1 year ago

Hi @PranayBolloju,

As I mentioned before, this bundle was designed to output three channels, one per label. 3D Slicer only takes the first one.

I've checked the training process and it seems it was designed to work like that - sigmoid per channel and to have one-hot representation of the output.

See the training transforms: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/train.json#L153-L160

I initially thought the previous changes could solve the issue. But as the model wasn't trained using the softmax activation function, you get the result you're showing. @tangy5, can you please confirm this?

A solution for this is to keep the transforms as they are and add another post transform that merges all three channels into a single label map before this one: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L92
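
One possible merge transform (a rough sketch, not the bundle's code; the channel order and the output label values are assumptions):

import torch
from monai.transforms import Lambdad

def merge_channels(pred):
    # pred: (3, H, W, D) overlapping binary maps -> (1, H, W, D) label map
    out = torch.zeros_like(pred[0])
    out[pred[1] > 0] = 2  # whole tumor (edema)
    out[pred[0] > 0] = 1  # tumor core
    out[pred[2] > 0] = 3  # enhancing tumor
    return out[None]

merge_transform = Lambdad(keys="pred", func=merge_channels)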

Another solution is to use the deepedit or segmentation model. Here are the instructions: https://www.youtube.com/watch?v=3HTh2dqZqew&list=PLtoSVSQ2XzyD4lc-lAacFBzOdv5Ou-9IA&index=3

Hope this helps,

PranayBolloju commented 1 year ago

Hi @diazandr3s Thanks for the clear explanation. I have tried to train a deepedit model. These are the changes I made in the config file:

self.labels = {
    "NCR": 1,
    "ED": 2,
    "ET": 3,
    "background": 0,
}

These are example files that I used for training. Volume: BRATS_001.nii.gz Label: BRATS_001.nii.gz. The total number of images with labels in the labels/final directory was 60, and the model was trained for 50 epochs. This is the segmentation. (screenshot)

Only one of the labels was being segmented.

Do you think the model can be improved with more epochs and more images? Or should I change the network being used or the network definition?

diazandr3s commented 1 year ago

Hi @PranayBolloju,

I'd suggest the following:

Deepedit uses the whole image for training and inference, while the segmentation model uses patches.

Sorry, I'd totally forgotten that I developed a model for BRATS. Please use this radiology app: https://github.com/Project-MONAI/MONAILabel/tree/bratsSegmentation/sample-apps/radiology

There you have the brats algo. Just uncomment these lines and comment out the others: https://github.com/Project-MONAI/MONAILabel/blob/bratsSegmentation/sample-apps/radiology/lib/configs/segmentation_brats.py#L33

You could download that radiology app and train the model.

Let me know how that goes,

pieper commented 1 year ago

BTW, some of the brain images in your screenshots are not loaded correctly in 3D Slicer (it's at least flipped front-to-back). It looks like your headers are being incorrectly encoded somewhere in the pipeline.

PranayBolloju commented 1 year ago

Hi @diazandr3s

Many thanks for the clear insights

I was planning to use a DICOMweb server as data storage, so I converted NIfTI images to DICOM using "plastimatch". This affected the channels of the images: they were 4-channel before converting to DICOM and became 1-channel after. So I converted the DICOM images back to NIfTI (now 1-channel NIfTI images) to train a brats segmentation model. This is an example volume with only 1 channel: BRATS_001.nii.gz This is an example label: BRATS_001.nii.gz

I have changed the following lines in the config file of segmentation_brats in the radiology app.

Uncommented these labels and commented out the others.

self.labels = {
    "edema": 1,
    "non-enhancing tumor": 2,
    "enhancing tumour": 3,
}

self.number_intensity_ch = 1

When I started training the model, this is the error I got:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/jupyter/brats/MONAILabel/monailabel/interfaces/utils/app.py", line 132, in <module>
    run_main()
  File "/home/jupyter/brats/MONAILabel/monailabel/interfaces/utils/app.py", line 117, in run_main
    result = a.train(request)
  File "/home/jupyter/brats/MONAILabel/monailabel/interfaces/app.py", line 421, in train
    result = task(request, self.datastore())
  File "/home/jupyter/brats/MONAILabel/monailabel/tasks/train/basic_train.py", line 393, in __call__
    torch.multiprocessing.spawn(main_worker, nprocs=world_size, args=(world_size, req, datalist, self))
  File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/monai/transforms/transform.py", line 91, in apply_transform
    return _apply_transform(transform, data, unpack_items)
  File "/opt/conda/lib/python3.7/site-packages/monai/transforms/transform.py", line 55, in _apply_transform
    return transform(parameters)
  File "/home/jupyter/brats/MONAILabel/sample-apps/radiology/lib/transforms/transforms_brats.py", line 105, in __call__
    unknown_mask = unknown_mask - mask_all_labels
  File "/opt/conda/lib/python3.7/site-packages/monai/data/meta_tensor.py", line 249, in __torch_function__
    ret = super().__torch_function__(func, types, args, kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/_tensor.py", line 1121, in __torch_function__
    ret = func(*args, **kwargs)
RuntimeError: The size of tensor a (240) must match the size of tensor b (155) at non-singleton dimension 2

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/home/jupyter/brats/MONAILabel/monailabel/tasks/train/basic_train.py", line 626, in main_worker
    task.train(rank, world_size, request, datalist)
  File "/home/jupyter/brats/MONAILabel/monailabel/tasks/train/basic_train.py", line 454, in train
    context.evaluator = self._create_evaluator(context)
  File "/home/jupyter/brats/MONAILabel/monailabel/tasks/train/basic_train.py", line 558, in _create_evaluator
    val_data_loader=self.val_data_loader(context),
  File "/home/jupyter/brats/MONAILabel/monailabel/tasks/train/basic_train.py", line 276, in val_data_loader
    dataset, datalist = self._dataset(context, context.val_datalist)
  File "/home/jupyter/brats/MONAILabel/monailabel/tasks/train/basic_train.py", line 204, in _dataset
    if context.dataset_type == "SmartCacheDataset"
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 979, in __init__
    super().__init__(data, transform, cache_num, cache_rate, num_init_workers, progress, copy_cache, as_contiguous)
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 793, in __init__
    self.set_data(data)
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 1013, in set_data
    super().set_data(data)
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 818, in set_data
    self._cache = _compute_cache()
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 807, in _compute_cache
    return self._fill_cache()
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 831, in _fill_cache
    desc="Loading dataset",
  File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
  File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/opt/conda/lib/python3.7/site-packages/monai/data/dataset.py", line 847, in _load_cache_item
    item = apply_transform(_xform, item)
  File "/opt/conda/lib/python3.7/site-packages/monai/transforms/transform.py", line 118, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <lib.transforms.transforms_brats.AddUnknownLabeld object at 0x7fe18c2a5490>

[2022-10-28 07:35:51,318] [52896] [ThreadPoolExecutor-0_0] [INFO] (monailabel.utils.async_tasks.utils:77) - Return code: 1

Then I commented out the following lines to remove the AddUnknownLabeld transform in lib/trainers/segmentation_brats:

def train_pre_transforms(self, context: Context):
    return [
        LoadImaged(keys=("image", "label"), reader="ITKReader"),
        # AddUnknownLabeld(keys="label", max_labels=self._labels[max(self._labels, key=self._labels.get)]),
        NormalizeLabelsInDatasetd(keys="label", label_names=self._labels),  # Specially for missing labels
        EnsureChannelFirstd(keys=("image", "label")),
        # SaveImaged(keys="label", output_postfix="", output_dir="/home/andres/Downloads", separate_folder=False),
        NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True),
        RandSpatialCropd(
            keys=["image", "label"],
            roi_size=[self.spatial_size[0], self.spatial_size[1], self.spatial_size[2]],
            random_size=False,
        ),
        # ...
    ]

def val_pre_transforms(self, context: Context):
    return [
        LoadImaged(keys=("image", "label"), reader="ITKReader"),
        # AddUnknownLabeld(keys="label", max_labels=self._labels[max(self._labels, key=self._labels.get)]),
        NormalizeLabelsInDatasetd(keys="label", label_names=self._labels),  # Specially for missing labels
        EnsureChannelFirstd(keys=("image", "label")),
        NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True),
        EnsureTyped(keys=("image", "label")),
        SelectItemsd(keys=("image", "label")),
    ]

I have trained the model with 300 epochs and 60 volumes with labels. This is the resulting segmentation. (screenshot)

Looks like all 3 labels are merged into a single mask.

I tried segmentation from the OHIF viewer too. (screenshot)

But when I tried to do segmentation from 2D MPR, this error can be seen: (screenshot)

lassoan commented 1 year ago

Those 4-channel NIfTI images in BRATS are complete nonsense, because 4 completely independent images are resampled and dumped into a single image file. This misuse is possible in NIfTI (although it breaks several rules of the standard and you lose information about what kind of images you have in the file), but it is not even possible in DICOM. If you want to store the images in DICOM then you need to create a separate series from each channel.

ulphypro commented 1 year ago

Hello, members

I have a question.

When I run the command monailabel start_server --app monaibundle --studies Task09_Spleen/imagesTr --conf models renalStructures_UNEST_segmentation_v0.2.0 (using PYTHONPATH=C:\Users\KRK\AppData\Local\Programs\Python;), the result shows the error: APP Directory monaibundle NOT Found.

The monaibundle location saved on my PC is C:\Users\KRK\AppData\Local\MONAILabel\MONAILabel\sample-apps\monaibundle\model\brats_mri_segmentation_v0.3.3.

Please let me know the correct monaibundle folder location.

SachidanandAlle commented 1 year ago

Where have you downloaded the app? There are 4 different sample apps.. check if the dir 'monaibundle' exists where you are running the command

SachidanandAlle commented 1 year ago

Also note.. bundles work well on the Linux version.. as sometimes they have bash scripts.. especially for training.. however for infer you can still use bundles on Windows via monailabel

ulphypro commented 1 year ago

@SachidanandAlle Thank you very much.

And I added '"ensure_channel_first": true' in inference.json and train.json, but I got the error 'Failed to run inference in MONAI Label Server'.

What should I do?

SachidanandAlle commented 1 year ago

start with the simple spleen one.. brain MRI input has 4 channels.. and possibly the model is trained over 3, or vice versa..

SachidanandAlle commented 1 year ago

also you need to check the error on the server side.. there will be a descriptive log for each of those steps.. that should give a fair amount of information.. what's happening.. why it's happening

ulphypro commented 1 year ago

@SachidanandAlle OK. I will try.

ulphypro commented 1 year ago

Dear all members

I'm working on auto segmentation with brats_mri_segmentation_v0.2.1 in 3D Slicer.

To start the server, I used the command 'monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1'.

I added '"ensure_channel_first": true' in the "preprocessing" part of inference.json of the monaibundle.

But it produces the error 'Failed to run Inference in MONAI Label Server'. Is there a solution?

train.json also needs to be edited, but I don't know where to add the code.

(screenshot: brats_error)

Please let me know the solution.

The detailed error is as follows.

[3D-Slicer error]

This will close current scene. Please make sure you have saved your current work. Are you sure to continue? Current Selection Options Section: infer Current Selection Options Name: brats_mri_segmentation_v0.2.1 Invalidate:: models => brats_mri_segmentation_v0.2.1 => device => [‘cuda’] => <class ‘list’> {‘id’: ‘BRATS_424’, ‘weight’: 1668702326, ‘path’: ‘C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz’, ‘ts’: 1657860597, ‘name’: ‘BRATS_424.nii.gz’} Check if file exists/shared locally: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz => True Original label not found … Current Selection Options Section: infer Current Selection Options Name: brats_mri_segmentation_v0.2.1 Invalidate:: models => brats_mri_segmentation_v0.2.1 => device => [‘cuda’] => <class ‘list’> Failed to run inference in MONAI Label Server Time consumed by segmentation: 7.4 Time consumed by next_sample: 7.9 [Server error] PS C:\Users\AA\AppData\Local\MONAILabel\MONAILabel> monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1 Using PYTHONPATH=C:\Users\AA\AppData\Local\MONAILabel\MONAILabel; “” 2022-11-18 01:24:32,250 - USING:: version = False 2022-11-18 01:24:32,250 - USING:: app = C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle 2022-11-18 01:24:32,251 - USING:: studies = C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr 2022-11-18 01:24:32,251 - USING:: verbose = INFO 2022-11-18 01:24:32,252 - USING:: conf = [[‘models’, ‘brats_mri_segmentation_v0.2.1’]] 2022-11-18 01:24:32,252 - USING:: host = 0.0.0.0 2022-11-18 01:24:32,252 - USING:: port = 8000 2022-11-18 01:24:32,252 - USING:: uvicorn_app = monailabel.app:app 2022-11-18 01:24:32,253 - USING:: ssl_keyfile = None 2022-11-18 01:24:32,253 - USING:: ssl_certfile = None 2022-11-18 01:24:32,253 - USING:: ssl_keyfile_password = None 2022-11-18 01:24:32,254 - USING:: ssl_ca_certs = None 2022-11-18 01:24:32,254 - USING:: workers = None 2022-11-18 01:24:32,254 - USING:: limit_concurrency = None 2022-11-18 01:24:32,254 - USING:: access_log = False 2022-11-18 01:24:32,255 - USING:: log_config = None 2022-11-18 01:24:32,255 - USING:: dryrun = False 2022-11-18 01:24:32,255 - USING:: action = start_server 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_API_STR = 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_PROJECT_NAME = MONAILabel 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_APP_DIR = 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_STUDIES = 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_AUTH_ENABLE = False 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_AUTH_DB = 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_APP_CONF = ‘{}’ 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_TASKS_TRAIN = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_TASKS_STRATEGY = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_TASKS_SCORING = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_TASKS_BATCH_INFER = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_DATASTORE = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_URL = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_USERNAME = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_PASSWORD = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_API_KEY = 2022-11-18 01:24:32,260 - ENV SETTINGS:: 
MONAI_LABEL_DATASTORE_CACHE_PATH = 2022-11-18 01:24:32,260 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_PROJECT = 2022-11-18 01:24:32,260 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_ASSET_PATH = 2022-11-18 01:24:32,260 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_DSA_ANNOTATION_GROUPS = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_USERNAME = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_PASSWORD = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_CACHE_PATH = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_QIDO_PREFIX = None 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_WADO_PREFIX = None 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_STOW_PREFIX = None 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_FETCH_BY_FRAME = False 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_CONVERT_TO_NIFTI = True 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_SEARCH_FILTER = ‘{“Modality”: “CT”}’ 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_CACHE_EXPIRY = 180 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_PROXY_TIMEOUT = 30.0 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_READ_TIMEOUT = 5.0 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_AUTO_RELOAD = True 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_READ_ONLY = False 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_FILE_EXT = ‘[“.nii.gz", ".nii”, “.nrrd", ".jpg”, “.png", ".tif”, “.svs", ".xml”]’ 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_SERVER_PORT = 8000 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_CORS_ORIGINS = ‘’ 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_SESSIONS = True 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_SESSION_PATH = 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_SESSION_EXPIRY = 3600 2022-11-18 01:24:32,266 - ENV SETTINGS:: MONAI_LABEL_INFER_CONCURRENCY = -1 2022-11-18 01:24:32,266 - ENV SETTINGS:: MONAI_LABEL_INFER_TIMEOUT = 600 2022-11-18 01:24:32,266 - ENV SETTINGS:: MONAI_LABEL_AUTO_UPDATE_SCORING = True 2022-11-18 01:24:32,266 - Allow Origins: [‘‘] [2022-11-18 01:24:32,995] [18644] [MainThread] [INFO] (uvicorn.error:75) - Started server process [18644] [2022-11-18 01:24:32,996] [18644] [MainThread] [INFO] (uvicorn.error:45) - Waiting for application startup. 
[2022-11-18 01:24:32,997] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.app:38) - Initializing App from: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle; studies: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr; conf: {‘models’: ‘brats_mri_segmentation_v0.2.1’} [2022-11-18 01:24:33,039] [18644] [MainThread] [INFO] (monailabel.utils.others.class_utils:37) - Subclass for MONAILabelApp Found: <class ‘main.MyApp’> [2022-11-18 01:24:33,835] [18644] [MainThread] [INFO] (monailabel.utils.others.generic:305) - +++ Adding Bundle from Local: brats_mri_segmentation_v0.2.1 => C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle\model\brats_mri_segmentation_v0.2.1 [2022-11-18 01:24:33,836] [18644] [MainThread] [INFO] (monailabel.utils.others.generic:317) - +++ Using Bundle Models: [‘brats_mri_segmentation_v0.2.1’] [2022-11-18 01:24:33,837] [18644] [MainThread] [INFO] (monailabel.interfaces.app:129) - Init Datastore for: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr [2022-11-18 01:24:33,838] [18644] [MainThread] [INFO] (monailabel.datastore.local:129) - Auto Reload: True; Extensions: [’.nii.gz’, ‘.nii’, '.nrrd’, ‘.jpg’, '.png’, ‘.tif’, '.svs’, ‘*.xml’] [2022-11-18 01:24:33,976] [18644] [MainThread] [INFO] (monailabel.datastore.local:576) - Invalidate count: 0 [2022-11-18 01:24:33,976] [18644] [MainThread] [INFO] (monailabel.datastore.local:150) - Start observing external modifications on datastore (AUTO RELOAD) [2022-11-18 01:24:34,037] [18644] [MainThread] [INFO] (main:63) - +++ Adding Inferer:: brats_mri_segmentation_v0.2.1 => <monailabel.tasks.infer.bundle.BundleInferTask object at 0x000001E2F49BF250> [2022-11-18 01:24:34,038] [18644] [MainThread] [INFO] (main:77) - +++ Adding Trainer:: brats_mri_segmentation_v0.2.1 => <monailabel.tasks.train.bundle.BundleTrainTask object at 0x000001E2F4BBBA60> [2022-11-18 01:24:34,039] [18644] [MainThread] [INFO] (main:87) - Active Learning Strategies:: [‘random’, ‘first’] [2022-11-18 01:24:34,039] [18644] [MainThread] [INFO] (monailabel.utils.sessions:51) - Session Path: C:\Users\AA.cache\monailabel\sessions [2022-11-18 01:24:34,039] [18644] [MainThread] [INFO] (monailabel.utils.sessions:52) - Session Expiry (max): 3600 [2022-11-18 01:24:34,040] [18644] [MainThread] [INFO] (monailabel.interfaces.app:468) - App Init - completed [2022-11-18 01:24:34,040] [timeloop] [INFO] Starting Timeloop… [2022-11-18 01:24:34,040] [18644] [MainThread] [INFO] (timeloop:60) - Starting Timeloop… [2022-11-18 01:24:34,041] [timeloop] [INFO] Registered job <function MONAILabelApp.on_init_complete..run_scheduler at 0x000001E2F4BBECA0> [2022-11-18 01:24:34,041] [18644] [MainThread] [INFO] (timeloop:42) - Registered job <function MONAILabelApp.on_init_complete..run_scheduler at 0x000001E2F4BBECA0> [2022-11-18 01:24:34,042] [timeloop] [INFO] Timeloop now started. Jobs will run based on the interval set [2022-11-18 01:24:34,042] [18644] [MainThread] [INFO] (timeloop:63) - Timeloop now started. Jobs will run based on the interval set [2022-11-18 01:24:34,042] [18644] [MainThread] [INFO] (uvicorn.error:59) - Application startup complete. 
[2022-11-18 01:24:34,043] [18644] [MainThread] [INFO] (uvicorn.error:206) - Uvicorn running on http://0.0.0.0:8000/ (Press CTRL+C to quit) [2022-11-18 01:25:26,734] [18644] [MainThread] [INFO] (monailabel.endpoints.activelearning:43) - Active Learning Request: {‘strategy’: ‘random’, ‘client_id’: ‘user-xyz’} [2022-11-18 01:25:26,786] [18644] [MainThread] [INFO] (monailabel.tasks.activelearning.random:47) - Random: Selected Image: BRATS_424; Weight: 1668702326 [2022-11-18 01:25:26,800] [18644] [MainThread] [INFO] (monailabel.endpoints.activelearning:59) - Next sample: {‘id’: ‘BRATS_424’, ‘weight’: 1668702326, ‘path’: ‘C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz’, ‘ts’: 1657860597, ‘name’: ‘BRATS_424.nii.gz’} [2022-11-18 01:25:27,160] [18644] [MainThread] [INFO] (monailabel.endpoints.infer:160) - Infer Request: {‘model’: ‘brats_mri_segmentation_v0.2.1’, ‘image’: ‘BRATS_424’, ‘device’: ‘cuda’, ‘result_extension’: ‘.nrrd’, ‘result_dtype’: ‘uint8’, ‘client_id’: ‘user-xyz’} [2022-11-18 01:25:27,161] [18644] [MainThread] [INFO] (monailabel.tasks.infer.basic_infer:276) - Infer Request (final): {‘device’: ‘cuda’, ‘model’: ‘brats_mri_segmentation_v0.2.1’, ‘image’: ‘C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz’, ‘result_extension’: ‘.nrrd’, ‘result_dtype’: ‘uint8’, ‘client_id’: ‘user-xyz’, ‘description’: ‘A pre-trained model for volumetric (3D) segmentation of brain tumor subregions from multimodal MRIs based on BraTS 2018 data’} [2022-11-18 01:25:27,164] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:76) - PRE - Run Transform(s) [2022-11-18 01:25:27,165] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:77) - PRE - Input Keys: [‘device’, ‘model’, ‘image’, ‘result_extension’, ‘result_dtype’, ‘client_id’, ‘description’, ‘image_path’] [2022-11-18 01:25:27,774] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:122) - PRE - Transform (LoadImageTensord): Time: 0.6082; image: (4, 240, 240, 155)(torch.float32) [2022-11-18 01:25:28,090] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:122) - PRE - Transform (NormalizeIntensityd): Time: 0.3166; image: (4, 240, 240, 155)(torch.float32) [2022-11-18 01:25:28,091] [18644] [MainThread] [INFO] (monailabel.tasks.infer.basic_infer:464) - Inferer:: cuda => SlidingWindowInferer => {‘roi_size’: [240, 240, 160], ‘sw_batch_size’: 1, ‘overlap’: 0.5, ‘mode’: constant, ‘sigma_scale’: 0.125, ‘padding_mode’: constant, ‘cval’: 0.0, ‘sw_device’: None, ‘device’: None, ‘progress’: False, ‘cpu_thresh’: None, ‘roi_weight_map’: None} [2022-11-18 01:25:28,092] [18644] [MainThread] [INFO] (monailabel.tasks.infer.basic_infer:413) - Infer model path: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle\model\brats_mri_segmentation_v0.2.1\models\model.pt [2022-11-18 01:25:31,082] [18644] [MainThread] [ERROR] (uvicorn.error:369) - Exception in ASGI application Traceback (most recent call last): File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py”, line 366, in run_asgi result = await app(self.scope, self.receive, self.send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\middleware\proxy_headers.py”, line 75, in call return await self.app(scope, receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\applications.py”, line 269, in call await super().call(scope, 
receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\applications.py”, line 124, in call await self.middleware_stack(scope, receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py”, line 184, in call raise exc File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py”, line 162, in call await self.app(scope, receive, _send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\cors.py”, line 84, in call await self.app(scope, receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py”, line 93, in call raise exc File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py”, line 82, in call await self.app(scope, receive, sender) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py”, line 21, in call raise e File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py”, line 18, in call await self.app(scope, receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py”, line 670, in call await route.handle(scope, receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py”, line 266, in handle await self.app(scope, receive, send) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py”, line 65, in app response = await func(request) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py”, line 227, in app raw_response = await run_endpoint_function( File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py”, line 160, in run_endpoint_function return await dependant.call(**values) File “C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\endpoints\infer.py”, line 179, in api_run_inference return run_inference(background_tasks, model, image, session_id, params, file, label, output) File “C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\endpoints\infer.py”, line 161, in run_inference result = instance.infer(request) File “C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\interfaces\app.py”, line 300, in infer result_file_name, result_json = task(request) File “C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\tasks\infer\basic_infer.py”, line 300, in call data = self.run_inferer(data, device=device) File “C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\tasks\infer\basic_infer.py”, line 480, in run_inferer outputs_d = decollate_batch(outputs) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py”, line 587, in decollate_batch for t, m in zip(out_list, decollate_batch(batch.meta)): File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py”, line 599, in decollate_batch b, non_iterable, deco = _non_zipping_check(batch, detach, pad, fill_value) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py”, line 500, in _non_zipping_check _deco = {key: decollate_batch(batch_data[key], detach, pad=pad, fill_value=fill_value) for key in batch_data} File 
“C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py”, line 500, in _deco = {key: decollate_batch(batch_data[key], detach, pad=pad, fill_value=fill_value) for key in batch_data} File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py”, line 599, in decollate_batch b, non_iterable, deco = _non_zipping_check(batch, detach, pad, fill_value) File “C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py”, line 502, in _non_zipping_check _deco = [decollate_batch(b, detach, pad=pad, fill_value=fill_value) for b in batch_data] TypeError: iteration over a 0-d array

diazandr3s commented 1 year ago

Dear all members

I’m working auto segmentation with brats_mri_segmentation_v0.2.1 in 3D-Slicer.

When I conduct server start, I used command that is ‘monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1’.

I added code that is ’ “ensure_channel_first”: true ’ in “preprocessing” part in Inference.json of monaibundle.

But it occurs error that is ‘Failed to run Inference in MONAI Label Server’. Does it have solution?

Train.json also need to edit, but I don’t know where it adds code.

brats_error

please, let me know solution.

Detailed error is as following.

[3D-Slicer error]

This will close current scene. Please make sure you have saved your current work. Are you sure to continue? Current Selection Options Section: infer Current Selection Options Name: brats_mri_segmentation_v0.2.1 Invalidate:: models => brats_mri_segmentation_v0.2.1 => device => [‘cuda’] => <class ‘list’> {‘id’: ‘BRATS_424’, ‘weight’: 1668702326, ‘path’: ‘C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz’, ‘ts’: 1657860597, ‘name’: ‘BRATS_424.nii.gz’} Check if file exists/shared locally: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz => True Original label not found … Current Selection Options Section: infer Current Selection Options Name: brats_mri_segmentation_v0.2.1 Invalidate:: models => brats_mri_segmentation_v0.2.1 => device => [‘cuda’] => <class ‘list’> Failed to run inference in MONAI Label Server Time consumed by segmentation: 7.4 Time consumed by next_sample: 7.9 [Server error] PS C:\Users\AA\AppData\Local\MONAILabel\MONAILabel> monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1 Using PYTHONPATH=C:\Users\AA\AppData\Local\MONAILabel\MONAILabel; “” 2022-11-18 01:24:32,250 - USING:: version = False 2022-11-18 01:24:32,250 - USING:: app = C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle 2022-11-18 01:24:32,251 - USING:: studies = C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr 2022-11-18 01:24:32,251 - USING:: verbose = INFO 2022-11-18 01:24:32,252 - USING:: conf = [[‘models’, ‘brats_mri_segmentation_v0.2.1’]] 2022-11-18 01:24:32,252 - USING:: host = 0.0.0.0 2022-11-18 01:24:32,252 - USING:: port = 8000 2022-11-18 01:24:32,252 - USING:: uvicorn_app = monailabel.app:app 2022-11-18 01:24:32,253 - USING:: ssl_keyfile = None 2022-11-18 01:24:32,253 - USING:: ssl_certfile = None 2022-11-18 01:24:32,253 - USING:: ssl_keyfile_password = None 2022-11-18 01:24:32,254 - USING:: ssl_ca_certs = None 2022-11-18 01:24:32,254 - USING:: workers = None 2022-11-18 01:24:32,254 - USING:: limit_concurrency = None 2022-11-18 01:24:32,254 - USING:: access_log = False 2022-11-18 01:24:32,255 - USING:: log_config = None 2022-11-18 01:24:32,255 - USING:: dryrun = False 2022-11-18 01:24:32,255 - USING:: action = start_server 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_API_STR = 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_PROJECT_NAME = MONAILabel 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_APP_DIR = 2022-11-18 01:24:32,256 - ENV SETTINGS:: MONAI_LABEL_STUDIES = 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_AUTH_ENABLE = False 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_AUTH_DB = 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_APP_CONF = ‘{}’ 2022-11-18 01:24:32,257 - ENV SETTINGS:: MONAI_LABEL_TASKS_TRAIN = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_TASKS_STRATEGY = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_TASKS_SCORING = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_TASKS_BATCH_INFER = True 2022-11-18 01:24:32,258 - ENV SETTINGS:: MONAI_LABEL_DATASTORE = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_URL = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_USERNAME = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_PASSWORD = 2022-11-18 01:24:32,259 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_API_KEY = 2022-11-18 01:24:32,260 - ENV SETTINGS:: 
MONAI_LABEL_DATASTORE_CACHE_PATH = 2022-11-18 01:24:32,260 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_PROJECT = 2022-11-18 01:24:32,260 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_ASSET_PATH = 2022-11-18 01:24:32,260 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_DSA_ANNOTATION_GROUPS = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_USERNAME = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_PASSWORD = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_CACHE_PATH = 2022-11-18 01:24:32,261 - ENV SETTINGS:: MONAI_LABEL_QIDO_PREFIX = None 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_WADO_PREFIX = None 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_STOW_PREFIX = None 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_FETCH_BY_FRAME = False 2022-11-18 01:24:32,262 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_CONVERT_TO_NIFTI = True 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_SEARCH_FILTER = ‘{“Modality”: “CT”}’ 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_CACHE_EXPIRY = 180 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_PROXY_TIMEOUT = 30.0 2022-11-18 01:24:32,263 - ENV SETTINGS:: MONAI_LABEL_DICOMWEB_READ_TIMEOUT = 5.0 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_AUTO_RELOAD = True 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_READ_ONLY = False 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_DATASTORE_FILE_EXT = ‘[“.nii.gz", ".nii”, “.nrrd", ".jpg”, “.png", ".tif”, “.svs", ".xml”]’ 2022-11-18 01:24:32,264 - ENV SETTINGS:: MONAI_LABEL_SERVER_PORT = 8000 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_CORS_ORIGINS = ‘’ 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_SESSIONS = True 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_SESSION_PATH = 2022-11-18 01:24:32,265 - ENV SETTINGS:: MONAI_LABEL_SESSION_EXPIRY = 3600 2022-11-18 01:24:32,266 - ENV SETTINGS:: MONAI_LABEL_INFER_CONCURRENCY = -1 2022-11-18 01:24:32,266 - ENV SETTINGS:: MONAI_LABEL_INFER_TIMEOUT = 600 2022-11-18 01:24:32,266 - ENV SETTINGS:: MONAI_LABEL_AUTO_UPDATE_SCORING = True 2022-11-18 01:24:32,266 - Allow Origins: [‘‘] [2022-11-18 01:24:32,995] [18644] [MainThread] [INFO] (uvicorn.error:75) - Started server process [18644] [2022-11-18 01:24:32,996] [18644] [MainThread] [INFO] (uvicorn.error:45) - Waiting for application startup. 
[2022-11-18 01:24:32,997] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.app:38) - Initializing App from: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle; studies: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr; conf: {‘models’: ‘brats_mri_segmentation_v0.2.1’} [2022-11-18 01:24:33,039] [18644] [MainThread] [INFO] (monailabel.utils.others.class_utils:37) - Subclass for MONAILabelApp Found: <class ‘main.MyApp’> [2022-11-18 01:24:33,835] [18644] [MainThread] [INFO] (monailabel.utils.others.generic:305) - +++ Adding Bundle from Local: brats_mri_segmentation_v0.2.1 => C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle\model\brats_mri_segmentation_v0.2.1 [2022-11-18 01:24:33,836] [18644] [MainThread] [INFO] (monailabel.utils.others.generic:317) - +++ Using Bundle Models: [‘brats_mri_segmentation_v0.2.1’] [2022-11-18 01:24:33,837] [18644] [MainThread] [INFO] (monailabel.interfaces.app:129) - Init Datastore for: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr [2022-11-18 01:24:33,838] [18644] [MainThread] [INFO] (monailabel.datastore.local:129) - Auto Reload: True; Extensions: [’.nii.gz’, ‘.nii’, '.nrrd’, ‘.jpg’, '.png’, ‘.tif’, '.svs’, ‘*.xml’] [2022-11-18 01:24:33,976] [18644] [MainThread] [INFO] (monailabel.datastore.local:576) - Invalidate count: 0 [2022-11-18 01:24:33,976] [18644] [MainThread] [INFO] (monailabel.datastore.local:150) - Start observing external modifications on datastore (AUTO RELOAD) [2022-11-18 01:24:34,037] [18644] [MainThread] [INFO] (main:63) - +++ Adding Inferer:: brats_mri_segmentation_v0.2.1 => <monailabel.tasks.infer.bundle.BundleInferTask object at 0x000001E2F49BF250> [2022-11-18 01:24:34,038] [18644] [MainThread] [INFO] (main:77) - +++ Adding Trainer:: brats_mri_segmentation_v0.2.1 => <monailabel.tasks.train.bundle.BundleTrainTask object at 0x000001E2F4BBBA60> [2022-11-18 01:24:34,039] [18644] [MainThread] [INFO] (main:87) - Active Learning Strategies:: [‘random’, ‘first’] [2022-11-18 01:24:34,039] [18644] [MainThread] [INFO] (monailabel.utils.sessions:51) - Session Path: C:\Users\AA.cache\monailabel\sessions [2022-11-18 01:24:34,039] [18644] [MainThread] [INFO] (monailabel.utils.sessions:52) - Session Expiry (max): 3600 [2022-11-18 01:24:34,040] [18644] [MainThread] [INFO] (monailabel.interfaces.app:468) - App Init - completed [2022-11-18 01:24:34,040] [timeloop] [INFO] Starting Timeloop… [2022-11-18 01:24:34,040] [18644] [MainThread] [INFO] (timeloop:60) - Starting Timeloop… [2022-11-18 01:24:34,041] [timeloop] [INFO] Registered job <function MONAILabelApp.on_init_complete..run_scheduler at 0x000001E2F4BBECA0> [2022-11-18 01:24:34,041] [18644] [MainThread] [INFO] (timeloop:42) - Registered job <function MONAILabelApp.on_init_complete..run_scheduler at 0x000001E2F4BBECA0> [2022-11-18 01:24:34,042] [timeloop] [INFO] Timeloop now started. Jobs will run based on the interval set [2022-11-18 01:24:34,042] [18644] [MainThread] [INFO] (timeloop:63) - Timeloop now started. Jobs will run based on the interval set [2022-11-18 01:24:34,042] [18644] [MainThread] [INFO] (uvicorn.error:59) - Application startup complete. 
[2022-11-18 01:24:34,043] [18644] [MainThread] [INFO] (uvicorn.error:206) - Uvicorn running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
[2022-11-18 01:25:26,734] [18644] [MainThread] [INFO] (monailabel.endpoints.activelearning:43) - Active Learning Request: {'strategy': 'random', 'client_id': 'user-xyz'}
[2022-11-18 01:25:26,786] [18644] [MainThread] [INFO] (monailabel.tasks.activelearning.random:47) - Random: Selected Image: BRATS_424; Weight: 1668702326
[2022-11-18 01:25:26,800] [18644] [MainThread] [INFO] (monailabel.endpoints.activelearning:59) - Next sample: {'id': 'BRATS_424', 'weight': 1668702326, 'path': 'C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz', 'ts': 1657860597, 'name': 'BRATS_424.nii.gz'}
[2022-11-18 01:25:27,160] [18644] [MainThread] [INFO] (monailabel.endpoints.infer:160) - Infer Request: {'model': 'brats_mri_segmentation_v0.2.1', 'image': 'BRATS_424', 'device': 'cuda', 'result_extension': '.nrrd', 'result_dtype': 'uint8', 'client_id': 'user-xyz'}
[2022-11-18 01:25:27,161] [18644] [MainThread] [INFO] (monailabel.tasks.infer.basic_infer:276) - Infer Request (final): {'device': 'cuda', 'model': 'brats_mri_segmentation_v0.2.1', 'image': 'C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\datasets\Task01_BrainTumour\imagesTr\BRATS_424.nii.gz', 'result_extension': '.nrrd', 'result_dtype': 'uint8', 'client_id': 'user-xyz', 'description': 'A pre-trained model for volumetric (3D) segmentation of brain tumor subregions from multimodal MRIs based on BraTS 2018 data'}
[2022-11-18 01:25:27,164] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:76) - PRE - Run Transform(s)
[2022-11-18 01:25:27,165] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:77) - PRE - Input Keys: ['device', 'model', 'image', 'result_extension', 'result_dtype', 'client_id', 'description', 'image_path']
[2022-11-18 01:25:27,774] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:122) - PRE - Transform (LoadImageTensord): Time: 0.6082; image: (4, 240, 240, 155)(torch.float32)
[2022-11-18 01:25:28,090] [18644] [MainThread] [INFO] (monailabel.interfaces.utils.transform:122) - PRE - Transform (NormalizeIntensityd): Time: 0.3166; image: (4, 240, 240, 155)(torch.float32)
[2022-11-18 01:25:28,091] [18644] [MainThread] [INFO] (monailabel.tasks.infer.basic_infer:464) - Inferer:: cuda => SlidingWindowInferer => {'roi_size': [240, 240, 160], 'sw_batch_size': 1, 'overlap': 0.5, 'mode': constant, 'sigma_scale': 0.125, 'padding_mode': constant, 'cval': 0.0, 'sw_device': None, 'device': None, 'progress': False, 'cpu_thresh': None, 'roi_weight_map': None}
[2022-11-18 01:25:28,092] [18644] [MainThread] [INFO] (monailabel.tasks.infer.basic_infer:413) - Infer model path: C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\apps\monaibundle\model\brats_mri_segmentation_v0.2.1\models\model.pt
[2022-11-18 01:25:31,082] [18644] [MainThread] [ERROR] (uvicorn.error:369) - Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 366, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\applications.py", line 269, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py", line 93, in __call__
    raise exc
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py", line 82, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 670, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 266, in handle
    await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 65, in app
    response = await func(request)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py", line 227, in app
    raw_response = await run_endpoint_function(
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py", line 160, in run_endpoint_function
    return await dependant.call(**values)
  File "C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\endpoints\infer.py", line 179, in api_run_inference
    return run_inference(background_tasks, model, image, session_id, params, file, label, output)
  File "C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\endpoints\infer.py", line 161, in run_inference
    result = instance.infer(request)
  File "C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\interfaces\app.py", line 300, in infer
    result_file_name, result_json = task(request)
  File "C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\tasks\infer\basic_infer.py", line 300, in __call__
    data = self.run_inferer(data, device=device)
  File "C:\Users\AA\AppData\Local\MONAILabel\MONAILabel\monailabel\tasks\infer\basic_infer.py", line 480, in run_inferer
    outputs_d = decollate_batch(outputs)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py", line 587, in decollate_batch
    for t, m in zip(out_list, decollate_batch(batch.meta)):
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py", line 599, in decollate_batch
    b, non_iterable, deco = _non_zipping_check(batch, detach, pad, fill_value)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py", line 500, in _non_zipping_check
    _deco = {key: decollate_batch(batch_data[key], detach, pad=pad, fill_value=fill_value) for key in batch_data}
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py", line 500, in <dictcomp>
    _deco = {key: decollate_batch(batch_data[key], detach, pad=pad, fill_value=fill_value) for key in batch_data}
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py", line 599, in decollate_batch
    b, non_iterable, deco = _non_zipping_check(batch, detach, pad, fill_value)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\data\utils.py", line 502, in _non_zipping_check
    _deco = [decollate_batch(b, detach, pad=pad, fill_value=fill_value) for b in batch_data]
TypeError: iteration over a 0-d array

Hi @ulphypro,

As mentioned before, having 4 modalities in a single NIfTI file does not make much sense: https://github.com/Project-MONAI/MONAILabel/issues/1051#issuecomment-1295108353

I'd recommend the same as @SachidanandAlle: https://github.com/Project-MONAI/MONAILabel/issues/1051#issuecomment-1314582197

Unfortunately, the monaibundle for BraTS (brats_mri_segmentation_v0.2.1) needs more work to properly handle the 4 modalities before it can be used in Slicer. It currently works in MONAI Core only.

Hope that makes sense,

ulphypro commented 1 year ago

Dear @diazandr3s

Thank you for answering my question.

Then, should I change in_channels from 4 to 1 and out_channels from 3 to 1 in configs/inference.json, and run with only one target in the "transforms" section, as follows: {"_target_": "Activationsd", "keys": "pred", "sigmoid": true}?

And should I make the same changes in configs/train.json?

diazandr3s commented 1 year ago

Hi @ulphypro,

The issue isn't only the code; it's also the dataset. Each file should contain a single modality, not 4 as it currently does.

If you want to use Slicer, you have to separate the 4 modalities or use the original BraTS 2021 dataset, which comes with the 4 modalities as separate files.
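For anyone needing to do that separation programmatically, here is a minimal sketch, assuming nibabel is installed and the MSD Task01 layout with the modality channel last, e.g. (240, 240, 155, 4). The file name and the modality order (FLAIR, T1, T1ce, T2) are illustrative assumptions; verify the order against the dataset's documentation before relying on it.

```python
import nibabel as nib
import numpy as np

# One MSD Task01 volume: four MRI modalities stacked along the last axis.
img = nib.load("datasets/Task01_BrainTumour/imagesTr/BRATS_424.nii.gz")
data = np.asanyarray(img.dataobj)  # shape (240, 240, 155, 4)

# Assumed modality order for Task01; check your dataset's docs.
modalities = ["flair", "t1", "t1ce", "t2"]

for idx, name in enumerate(modalities):
    vol = data[..., idx]  # a single-modality 3D volume, (240, 240, 155)
    out = nib.Nifti1Image(vol, img.affine, img.header)
    nib.save(out, f"BRATS_424_{name}.nii.gz")
```

Each output file then contains one modality, matching the layout of the original BraTS 2021 release.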

Once you have the separate files, I'd recommend using the segmentation model in the MONAI Label radiology app: https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/radiology

Same discussion is happening here: https://github.com/Project-MONAI/model-zoo/issues/239

Hope that makes sense.

ulphypro commented 1 year ago

@diazandr3s

Thank you for answering my question.

I downloaded the BraTS 2021 dataset, as you mentioned.

Should I run apps/radiology with the BraTS 2021 dataset?

After starting the MONAI Label server with 'monailabel start_server --app apps/radiology --studies datasets/Task01_BrainTumour/imagesTr --conf models segmentation' in Windows PowerShell, I can't run it in 3D Slicer, because it doesn't offer a segmentation model for brain tumours.

The person who posted Project-MONAI/model-zoo#239 is also me.

diazandr3s commented 1 year ago

Hi @ulphypro,

It seems you've downloaded Task01 from the Medical Segmentation Decathlon. That dataset is composed of files that contain all four modalities in a single NIfTI file. This is precisely the issue: https://github.com/Project-MONAI/MONAILabel/issues/1051#issuecomment-1295108353

I'd recommend you download the original BraTS dataset, which has the NIfTI files separated - please check here: https://www.med.upenn.edu/cbica/brats2021/

Then you could start training a model from scratch as recommended here: https://www.youtube.com/watch?v=3HTh2dqZqew&list=PLtoSVSQ2XzyD4lc-lAacFBzOdv5Ou-9IA&index=4

I hope that helps,

ulphypro commented 1 year ago

Dear @diazandr3s

Thank you for answering my question.

I ran MONAI Label with 3D Slicer as you suggested.

I proceeded as follows:

1. I downloaded the BraTS 2021 dataset (Kaggle link: https://www.kaggle.com/datasets/dschettler8845/brats-2021-task1). Included data:

   - BraTS2021_00495.tar
     - BraTS2021_00495_flair.nii.gz
     - BraTS2021_00495_seg.nii.gz
     - BraTS2021_00495_t1.nii.gz
     - BraTS2021_00495_t1ce.nii.gz
     - BraTS2021_00495_t2.nii.gz
   - BraTS2021_00621.tar
     - BraTS2021_00621_flair.nii.gz
     - BraTS2021_00621_seg.nii.gz
     - BraTS2021_00621_t1.nii.gz
     - BraTS2021_00621_t1ce.nii.gz
     - BraTS2021_00621_t2.nii.gz
   - BraTS2021_Training_Data.tar
     - BraTS2021_00000
       - BraTS2021_00000_flair.nii.gz
       - BraTS2021_00000_seg.nii.gz
       - BraTS2021_00000_t1.nii.gz
       - BraTS2021_00000_t1ce.nii.gz
       - BraTS2021_00000_t2.nii.gz
     - BraTS2021_00001
       - BraTS2021_00001_flair.nii.gz
       - BraTS2021_00001_seg.nii.gz
       - BraTS2021_00001_t1.nii.gz
       - BraTS2021_00001_t1ce.nii.gz
       - BraTS2021_00001_t2.nii.gz
     - ...
     - BraTS2021_01666
       - BraTS2021_01666_flair.nii.gz
       - BraTS2021_01666_seg.nii.gz
       - BraTS2021_01666_t1.nii.gz
       - BraTS2021_01666_t1ce.nii.gz
       - BraTS2021_01666_t2.nii.gz

2. I edited the code in 'apps/radiology/lib/configs/segmentation_brats.py' as shown in the figure below. (screenshot: segmentation_brats_code_edit)

3. I started the server from Windows PowerShell:

   - monailabel start_server --app apps/radiology --studies datasets/BraTS2021_Training_Data/BraTS2021_00000 --conf models segmentation_brats
4. After opening the MONAI Label module in 3D Slicer, I pressed the 'Refresh' button in the MONAI Label server option and then the 'Next Sample' button -> this raises the following error (see the note after this traceback):

[2022-11-28 19:42:42,465] [29288] [MainThread] [ERROR] (uvicorn.error:369) - Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 366, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\applications.py", line 269, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py", line 93, in __call__
    raise exc
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\exceptions.py", line 82, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 670, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 266, in handle
    await self.app(scope, receive, send)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\starlette\routing.py", line 65, in app
    response = await func(request)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py", line 227, in app
    raw_response = await run_endpoint_function(
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\fastapi\routing.py", line 160, in run_endpoint_function
    return await dependant.call(**values)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\endpoints\infer.py", line 179, in api_run_inference
    return run_inference(background_tasks, model, image, session_id, params, file, label, output)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\endpoints\infer.py", line 161, in run_inference
    result = instance.infer(request)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\interfaces\app.py", line 299, in infer
    result_file_name, result_json = task(request)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\basic_infer.py", line 271, in __call__
    data = self.run_inferer(data, device=device)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monailabel\tasks\infer\basic_infer.py", line 436, in run_inferer
    outputs = inferer(inputs, network)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\inferers\inferer.py", line 202, in __call__
    return sliding_window_inference(
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\inferers\utils.py", line 180, in sliding_window_inference
    seg_prob_out = predictor(window_data, *args, **kwargs)  # batched patch segmentation
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\networks\nets\unet.py", line 311, in forward
    x = self.model(x)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\networks\blocks\convolutions.py", line 314, in forward
    res: torch.Tensor = self.residual(x)  # create the additive residual from x
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\conv.py", line 607, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\AA\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\conv.py", line 602, in _conv_forward
    return F.conv3d(
RuntimeError: Given groups=1, weight of size [16, 4, 3, 3, 3], expected input[1, 1, 128, 128, 128] to have 4 channels, but got 1 channels instead

Question 1. I ran the MONAI Label module following the YouTube video you shared, using the radiology app and the BraTS 2021 dataset. What should I do to solve the error in step 4? Please give me a tip for resolving it.

Question 2. How many training rounds do I need before I can run 'Auto Segmentation'?

diazandr3s commented 1 year ago

Hi @ulphypro,

A couple of things here:

- Files with the ending _seg.nii.gz are the segmentation ground truth. To run MONAI Label, they shouldn't be in the same folder as the images.
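As a minimal sketch of that first point (paths hypothetical): move the ground-truth files out of the image folder. MONAI Label conventionally picks up existing labels from a labels/final subfolder next to the images, but verify the expected layout and label file naming for your MONAI Label version.

```python
import shutil
from pathlib import Path

studies = Path("datasets/BraTS2021_Training_Data/BraTS2021_00000")
label_dir = studies / "labels" / "final"
label_dir.mkdir(parents=True, exist_ok=True)

# Move every ground-truth segmentation out of the image folder.
for seg in studies.glob("*_seg.nii.gz"):
    shutil.move(str(seg), str(label_dir / seg.name))
```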

Hope this helps,

ulphypro commented 1 year ago

Dear @diazandr3s, thank you for the solution tips.

Your answer: "Files with the ending _seg.nii.gz are the segmentation ground truth. To run MONAI Label, they shouldn't be in the same folder as the images."

-> Following that, I did not put the _seg.nii.gz files in the datasets/BraTS2021 folders, as shown in the screenshot below. (screenshot)

-> I edited the code in segmentation_brats.py. (screenshot)

-> I removed every file from the apps/radiology/model folder. (screenshot)

Then I started the server and 3D Slicer again. The procedure was as follows:

1. monailabel start_server --app apps/radiology --studies datasets/BraTS2021_Training_Data/BraTS2021_00002 --conf models segmentation_brats ![image](https://user-images.githubusercontent.com/67679322/205031841-6aa9c3f5-7127-451f-b58c-621250bf492c.png)

2. Run the MONAI Label module in 3D Slicer: I pressed the 'MONAI Label server' button to connect. It started without error, but when I pressed the 'Next Sample' button, everything was segmented, including regions outside the brain. ![seg_nii_gz_file_result](https://user-images.githubusercontent.com/67679322/205032376-f0a03aec-18aa-47a1-bf11-1aea7d41385c.png)

Please let me know how to get rid of the green box in the 3D view of 3D Slicer after I press 'Next Sample' in the MONAI Label module.
diazandr3s commented 1 year ago

Hi @ulphypro,

After getting the next sample, did you press the run auto-segmentation button? For how long have you trained the model?

The green region looks like a prediction from a model that hasn't been trained yet.

BTW, the images you have in the main folder are from the same patient but of different modalities (FLAIR, T1, T1ce and T2). Are you sure you want to do that? You'd be training a model to recognise tumours on multiple modalities at the same time. I'd use a single modality (FLAIR, T1ce or T2), not all of them, and use more cases/patients.
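To illustrate the single-modality route, here is a minimal sketch (the file name is hypothetical) showing that one modality loaded channel-first yields a 1-channel volume, which is what an in_channels=1 network expects:

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged

transforms = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),  # put the channel dimension first
])

data = transforms({"image": "BraTS2021_00000_flair.nii.gz"})
print(data["image"].shape)  # e.g. (1, 240, 240, 155): a single channel
```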

Hope this helps,