Project-MONAI / MONAILabel

MONAI Label is an intelligent open source image labeling and learning tool.
https://docs.monai.io/projects/label
Apache License 2.0

Annotation of 2D images with MONAILabel and 3D Slicer #699

Closed. j-sieger closed this issue 1 year ago.

j-sieger commented 2 years ago

Is there any MONAILabel app for annotating 2D (PNG, JPEG) images?

I have worked on annotating 3D (NIfTI) images with MONAILabel and 3D Slicer. It is a very nice product and I like it.

I am exploring annotation of 2D images with MONAILabel, but I could not find any existing work on this. Any help would be appreciated, such as links to previous work or guidance on how to achieve it.

My current work: I have made a few changes to the existing code in the deepedit/main.py file, such as the in_channels and spatial dimensions of the UNet model. But I am facing errors related to image headers; it looks like my loaded dataset is still being treated as NIfTI.
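
As an aside, a minimal sketch of the kind of change described above: configuring MONAI's UNet for 2D inputs. The channel and class counts here are illustrative placeholders, not the app's actual values.

```python
# Hedged sketch: a 2D UNet configuration (values are illustrative, not the
# deepedit app's actual settings).
from monai.networks.nets import UNet

network = UNet(
    spatial_dims=2,   # 2D instead of 3D
    in_channels=3,    # e.g. an RGB input
    out_channels=2,   # background + one foreground class
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
```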

diazandr3s commented 2 years ago

Hi @j-sieger,

Thanks for opening this issue, this is a great question.

My first suggestion for this task is to work with the Segmentation App instead of the DeepEdit App. I think it could be easier.

As a side project, I've started an App for 2D segmentation based on the Segmentation App. You may find it helpful.

It is in the apps branch.

I've added 2D image formats in MONAI Label so it detects my datastore. Here is the list of file formats.

I've made several changes to the transforms for training and inference. But I haven't investigated much how 3D Slicer manages 2D images.

It'd be great to get your experience with this 2D app. Please let us know if you face any errors ... We could get input from the 3D Slicer community as well.
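
As an illustration, a hedged sketch (not the actual apps-branch code) of what "changing the transforms for 2D" typically looks like in MONAI: 2D spatial sizes instead of 3D ones. Keys and sizes are placeholders.

```python
# Hypothetical 2D pre-transforms; keys and sizes are illustrative.
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    Resized,
    ScaleIntensityd,
)

train_pre_transforms = Compose([
    LoadImaged(keys=("image", "label")),
    EnsureChannelFirstd(keys=("image", "label")),
    ScaleIntensityd(keys="image"),
    # 2D spatial_size, not (x, y, z); nearest interpolation keeps labels discrete
    Resized(keys=("image", "label"), spatial_size=(256, 256), mode=("area", "nearest")),
])
```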

lassoan commented 2 years ago

> But I haven't investigated much how 3D Slicer manages 2D images.

3D Slicer does not distinguish between 2D and 3D images; it simply treats a 2D image as a single-slice 3D image.
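
To make this concrete, a minimal sketch (assuming Pillow, numpy, and nibabel are available; filenames are placeholders) of wrapping a 2D PNG as a single-slice 3D NIfTI volume, which is how 3D Slicer would see it:

```python
# Wrap a 2D image as a single-slice 3D volume (the identity affine is a placeholder).
import numpy as np
import nibabel as nib
from PIL import Image

img2d = np.asarray(Image.open("input.png").convert("L"))   # (rows, cols)
vol = img2d[:, :, np.newaxis]                              # (rows, cols, 1): one slice
nib.save(nib.Nifti1Image(vol.astype(np.uint8), affine=np.eye(4)), "input.nii.gz")
```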

diazandr3s commented 2 years ago

> But I haven't investigated much how 3D Slicer manages 2D images.

> 3D Slicer does not distinguish between 2D and 3D images; it simply treats a 2D image as a single-slice 3D image.

Many thanks for your input, @lassoan. For 2D applications, would you recommend using the same file formats as for 3D applications (i.e. NRRD, NIfTI, DICOM), or should .png/.jpg work as well?

Thanks again!

lassoan commented 2 years ago

If the 2D image has origin, spacing, or orientation information, is large (>4 GB), or uses a bit depth >8, then I would use NRRD/NIfTI/DICOM, because these 3D formats can store all of this data in a standard way, while consumer image file formats (PNG, JPEG, etc.) struggle.

However, if the user just works with uncalibrated RGB images (for example, photos), then it would make sense to allow MONAILabel to use that format directly and not require the user to convert to/from NIfTI.
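
For example, a hedged sketch (assuming the pynrrd package) of storing a 2D image with calibrated pixel spacing in NRRD, which PNG/JPEG cannot record in a standard way; the spacing and origin values are placeholders:

```python
# Store a 2D grayscale image (e.g. a >8-bit X-ray) with spacing/origin metadata.
import numpy as np
import nrrd

image = np.zeros((512, 512), dtype=np.uint16)  # 16-bit pixels (placeholder data)
header = {
    "space dimension": 2,
    "space directions": [[0.2, 0.0], [0.0, 0.2]],  # 0.2 mm pixel spacing (placeholder)
    "space origin": [0.0, 0.0],
}
nrrd.write("image.nrrd", image, header)
```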

j-sieger commented 2 years ago

@diazandr3s Thanks for your input. The work your team has started looks like a great starting point; I am working with it now and will let you know my experience annotating 2D images with 3D Slicer.

diazandr3s commented 2 years ago

> @diazandr3s Thanks for your input. The work your team has started looks like a great starting point; I am working with it now and will let you know my experience annotating 2D images with 3D Slicer.

Thanks, @lassoan! @j-sieger please keep us posted.

j-sieger commented 2 years ago

@diazandr3s

I am facing the error below while running inference. Please help me resolve it.

My understanding of the error: roi_size has length 2 because of [160, 160], while my input seems to have 3 spatial dimensions, possibly because it is an RGB image, and this causes the dimension mismatch. Please let me know your thoughts on how to solve this.

```
[2022-03-23 12:54:35,020] [15688] [MainThread] [INFO] (monailabel.interfaces.tasks.infer:391) - Inferer:: SlidingWindowInferer => {'roi_size': [160, 160], 'sw_batch_size': 1, 'overlap': 0.25, 'mode': <BlendMode.CONSTANT: 'constant'>, 'sigma_scale': 0.125, 'padding_mode': <PytorchPadMode.CONSTANT: 'constant'>, 'cval': 0.0, 'sw_device': None, 'device': None, 'progress': False}
[2022-03-23 12:54:35,024] [15688] [MainThread] [INFO] (monailabel.interfaces.tasks.infer:339) - Infer model path: None
[2022-03-23 12:54:35,040] [15688] [MainThread] [ERROR] (uvicorn.error:369) - Exception in ASGI application
Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 366, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/applications.py", line 212, in __call__
    await super().__call__(scope, receive, send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 656, in __call__
    await route.handle(scope, receive, send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 259, in handle
    await self.app(scope, receive, send)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 61, in app
    response = await func(request)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/routing.py", line 226, in app
    raw_response = await run_endpoint_function(
  File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/routing.py", line 159, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monailabel/endpoints/infer.py", line 177, in api_run_inference
    return run_inference(background_tasks, model, image, session_id, params, file, label, output)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monailabel/endpoints/infer.py", line 160, in run_inference
    result = instance.infer(request)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monailabel/interfaces/app.py", line 253, in infer
    result_file_name, result_json = task(request)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monailabel/interfaces/tasks/infer.py", line 266, in __call__
    data = self.run_inferer(data, device=device)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monailabel/interfaces/tasks/infer.py", line 406, in run_inferer
    outputs = inferer(inputs, network)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/inferers/inferer.py", line 168, in __call__
    return sliding_window_inference(
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/inferers/utils.py", line 102, in sliding_window_inference
    roi_size = fall_back_tuple(roi_size, image_size_)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/utils/misc.py", line 187, in fall_back_tuple
    user = ensure_tuple_rep(user_provided, ndim)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/utils/misc.py", line 144, in ensure_tuple_rep
    raise ValueError(f"Sequence must have length {dim}, got {len(tup)}.")
ValueError: Sequence must have length 3, got 2.
```
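
For reference, a small sketch reproducing this error (assuming monai and torch are installed): SlidingWindowInferer's roi_size must have one entry per spatial dimension of the input tensor.

```python
# Reproduce the roi_size/spatial-dims mismatch behind the traceback above.
import torch
from monai.inferers import SlidingWindowInferer

net = torch.nn.Identity()                       # stand-in network
inferer = SlidingWindowInferer(roi_size=(160, 160))

# (batch, channel, H, W) -> 2 spatial dims: matches roi_size, runs fine.
inferer(torch.zeros(1, 3, 256, 256), net)

# (batch, channel, H, W, D) -> 3 spatial dims: a length-2 roi_size fails.
try:
    inferer(torch.zeros(1, 3, 256, 256, 1), net)
except ValueError as e:
    print(e)  # Sequence must have length 3, got 2.
```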

lassoan commented 2 years ago

When you represent your data as a numpy array, you may not know what each axis means; for example, a 3D array may be 2D spatial + color or 3D spatial. To avoid this ambiguity, a complex but robust and flexible solution would be to store axis metadata (which would tell what each axis of the numpy array means); a simpler solution could be to use a fixed axis order, such as volume, slice, row, column, component (the shape of a color + 2D spatial image would then be (1, j, i, 3), while the shape of a 3D spatial image would be (k, j, i)).
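
A small numpy sketch of the fixed axis-order convention described above (the helper name is hypothetical):

```python
# Fixed axis order: (slice, row, column, component) disambiguates 2D color
# images from 3D grayscale volumes by shape alone.
import numpy as np

photo = np.zeros((1, 480, 640, 3), dtype=np.uint8)   # 2D color image: (1, j, i, 3)
volume = np.zeros((30, 480, 640), dtype=np.uint8)    # 3D spatial volume: (k, j, i)

def describe(arr: np.ndarray) -> str:
    """Classify an array by the fixed-order convention (an assumption)."""
    if arr.ndim == 4 and arr.shape[0] == 1:
        return f"2D color image, {arr.shape[1]}x{arr.shape[2]}, {arr.shape[3]} components"
    if arr.ndim == 3:
        return f"3D volume, {arr.shape[0]} slices of {arr.shape[1]}x{arr.shape[2]}"
    return "unknown layout"

print(describe(photo))   # 2D color image, 480x640, 3 components
print(describe(volume))  # 3D volume, 30 slices of 480x640
```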

diazandr3s commented 2 years ago

Thanks for your input, @lassoan. This is helpful.

Storing axis metadata will definitely help to generalize 2D applications. For this, we can use the NRRD file format.

@j-sieger, I created the 2D segmentation app for lung segmentation using grayscale images, specifically X-ray images, like this example: https://github.com/imlab-uiip/lung-segmentation-2d

The app I mentioned above isn't fully functional yet. As there are not many users of 2D segmentation applications, I stopped working on it.

However, the MONAI Label server can support this without major problems. I think we should first work on defining a robust way of storing 2D images (i.e. multispectral, RGB, or grayscale) as @lassoan suggested.

Which application are you working on? Are they natural images?

j-sieger commented 2 years ago

Thanks @lassoan , @diazandr3s for your suggestions.

I don't have a firm requirement for RGB versus grayscale. I started working on this idea of using 3D Slicer with MONAILabel for 2D segmentation because, in the future, I will need to use 3D Slicer not only for segmenting 3D images but also 2D images.

I chose RGB images as they are the more common form of 2D image. Currently I am using the sample dataset from Kaggle (https://www.kaggle.com/c/data-science-bowl-2018/data). Once I have an app that works well on this sample dataset, I will move on to a more complex dataset (the actual requirement).

It looks like the 2D segmentation app is designed for grayscale images. I am going through the code of both apps (lung-segmentation-2d, segmentation2d) to understand them and make the changes needed to get this working.

diazandr3s commented 2 years ago

Thanks for clarifying, @j-sieger.

As you may know, there are several types of 2D images: natural images, histopathological images, fundus images, X-ray images, satellite images, multispectral images, etc.

Each application has its own challenge.

The dataset you've mentioned seems to be for a pathology application (nuclei segmentation). For this, there is a pathology application that we're working on.

As there are viewers/clients specialized in these applications, we've created MONAI Label plugins for them: QuPath and Digital Slide Archive.

The 2D segmentation app I created was initially intended for grayscale images such as X-ray images ... but we should first address the data management as @lassoan suggested ...

Hope this helps.

j-sieger commented 2 years ago

@diazandr3s Thanks for your clear information.

Actually, I was going through your newly released pathology application today as well, and I came to the same conclusions you mention. I will follow the pathology application thread and check the results.

j-sieger commented 2 years ago

@diazandr3s For pathology images we can use the newly released pathology application.

What about segmenting OCT images (https://images.app.goo.gl/61ZTFXZzWZ5u7Gvm9)? If I want to segment 9 different classes, as in the linked image, which application should I use?

Is it better to go with the 2D segmentation app and 3D Slicer, or QuPath with the pathology app?

Please also suggest any other combinations for achieving segmentation of OCT images, based on your knowledge and experience.

diazandr3s commented 2 years ago

This is a great question @j-sieger.

My suggestion is to investigate which viewer is most popular for segmenting OCT images. We could then check whether we can implement a MONAI Label plugin for it.

From my little experience with OCT images, they are 3D volumes that can also be analyzed as 2D slices, right?

If that's the case, you could easily start using MONAI Label to segment the 3D volumes.

@lassoan, do you have experience with OCT image applications on 3D Slicer?

SachidanandAlle commented 2 years ago

Actually, it should be very simple. For the deepgrow 2D model we train on 2D images: basically, take the z dimension, get all the 2D slices, and train a model on them, as sketched below.

For pathology it's all 2D; WSI inference may be the extra thing, which we don't have to worry about for smaller 2D images. It's all about how you craft your pre-transforms to create the required input to train/infer a model.

Once you have the model ready as part of the MONAILabel server app, you can try the direct APIs at http://127.0.0.1:8000/ to run basic infer/train actions, and later you can see something similar working in any client that supports rendering those images/label masks.
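
A rough sketch of the "take the z dimension and get all the 2D slices" idea (assuming numpy volumes laid out as (z, y, x); adapt to your loader's convention):

```python
# Turn one 3D volume + label into a stack of 2D training samples.
import numpy as np

def volume_to_slices(volume: np.ndarray, label: np.ndarray):
    """Yield matching (image, label) 2D slices along the z axis."""
    assert volume.shape == label.shape, "image/label shapes must match"
    for z in range(volume.shape[0]):
        yield volume[z], label[z]

# Example: a 30-slice scan becomes 30 independent 2D samples.
vol, lab = np.zeros((30, 256, 256)), np.zeros((30, 256, 256))
print(len(list(volume_to_slices(vol, lab))))  # 30
```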

j-sieger commented 2 years ago

@diazandr3s: I am working on the 2D segmentation app. I have labelled a few 2D images from my dataset; the labels were saved in *.nii.gz format.

I am facing the error below in the transforms when trying to train the model.

Exception in thread Thread-18:
Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/transforms/transform.py", line 89, in apply_transform
    return _apply_transform(transform, data, unpack_items)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/transforms/transform.py", line 53, in _apply_transform
    return transform(parameters)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/transforms/croppad/dictionary.py", line 1171, in __call__
    self.randomize(label, fg_indices, bg_indices, image)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/transforms/croppad/dictionary.py", line 1147, in randomize
    self.spatial_size = fall_back_tuple(self.spatial_size, default=label.shape[1:])
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/utils/misc.py", line 190, in fall_back_tuple
    user = ensure_tuple_rep(user_provided, ndim)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/utils/misc.py", line 144, in ensure_tuple_rep
    raise ValueError(f"Sequence must have length {dim}, got {len(tup)}.")
ValueError: Sequence must have length 3, got 2.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/threading.py", line 954, in _bootstrap_inner
    self.run()
  File "/opt/conda/lib/python3.9/threading.py", line 892, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/data/thread_buffer.py", line 45, in enqueue_values
    for src_val in self.src:
  File "/home/ec2-user/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/ec2-user/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/ec2-user/.local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ec2-user/.local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/data/dataset.py", line 97, in __getitem__
 return self._transform(index)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/data/dataset.py", line 806, in _transform
    data = apply_transform(_transform, data)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/monai/transforms/transform.py", line 113, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.croppad.dictionary.RandCropByPosNegLabeld object at 0x7f7582dde940>

A few of the parameters passed from the randomize function to fall_back_tuple are shown below; I printed them using print statements in misc.py.

Jani_misc: default: torch.Size([360, 360, 1])
Jani_misc: ndim: 3
Jani_misc: user_provided: (96, 96)
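
A quick sketch of what these printed values mean for MONAI's fall_back_tuple: the label shape (360, 360, 1) implies 3 spatial dimensions, so a 2-element spatial_size cannot be broadcast and the transform raises.

```python
# Reproduce the length mismatch seen in the prints above (assumes monai installed).
from monai.utils import fall_back_tuple

default = (360, 360, 1)                       # from label.shape[1:]
print(fall_back_tuple((96, 96, 1), default))  # (96, 96, 1): lengths match, OK
try:
    fall_back_tuple((96, 96), default)        # length 2 vs ndim 3
except ValueError as e:
    print(e)  # Sequence must have length 3, got 2.
```
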
diazandr3s commented 2 years ago

Thanks, @j-sieger. It seems the error comes from using the RandCropByPosNegLabeld transform (https://github.com/Project-MONAI/MONAI/blob/dev/monai/transforms/croppad/dictionary.py#L1042) in a 2D application. Can you please comment that line out and retry? Which dataset are you using for this? Is it publicly available? If I can access the dataset, I may be able to help debug this.

lassoan commented 2 years ago

> Do you have experience with OCT image applications on 3D Slicer?

Slicer supports OCT images.

In general, OCT images are quite similar to ultrasound images, which are used quite extensively in Slicer. So, while Slicer does not have features specifically developed/optimized for OCT (other than OCT importers), the existing general-purpose tools should work well for OCTs.

j-sieger commented 2 years ago

@diazandr3s:

Currently I am working on the nuclei dataset from Kaggle (https://www.kaggle.com/competitions/data-science-bowl-2018/overview). Once the model works well on this dataset, I will move on to 2D OCT images.

SachidanandAlle commented 2 years ago

If you are looking at nuclei-type images, then it's better to try the QuPath + pathology example:

https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/qupath
https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/pathology

SachidanandAlle commented 1 year ago

I see we have a couple of examples to refer to for 2D segmentation across multiple viewers. Closing the issue. Feel free to open a new issue for any specific support/help.