Project-MONAI / MONAILabel

MONAI Label is an intelligent open source image labeling and learning tool.
https://docs.monai.io/projects/label
Apache License 2.0

Multiple image modality Support #241

Open finalelement opened 3 years ago

finalelement commented 3 years ago

This milestone needs further clarification: what are the objectives and deliverables?

diazandr3s commented 3 years ago

This issue covers a couple of changes:

one-matrix commented 2 years ago

This is an important feature for multi-sequence MRI (T1, T2, ...). Looking forward to it!

diazandr3s commented 2 years ago

Thanks for your comment, @one-matrix. We're creating a smarter heuristic planner that will allow the user to work with multimodal volumes. I hope our first use cases will work on the BRATS or ProstateX datasets.

Stay tuned! :)

one-matrix commented 2 years ago

Thanks @diazandr3s, I have changed the code to be compatible, like this: in interfaces/app.py, add an `images` field to the request and combine one NIfTI file from multiple DICOM files. Hope to contribute some code.

        if os.path.exists(image_id):
            request["save_label"] = False
        else:
            request["image"] = datastore.get_image_uri(request["image"])
            if "images" in request:
                # Combine the listed modality files into one multi-channel NIfTI
                nii_files = [datastore.get_image_uri(f) for f in request["images"]]
                nifti_convert = NifitConvert()
                request["image"] = nifti_convert.mutil_nifti2_1file(nii_files)
                print("combine nii {}".format(request["image"]))
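For reference, the request-side change above might be driven from the client like this (a hedged sketch: `build_infer_request` and the `images` key are assumptions based on the proposed extension, not part of the released API):

```python
# Sketch of a client-side request carrying multiple modality IDs. The "images"
# key is the extension proposed in the snippet above, not an official API
# field, and build_infer_request is a hypothetical helper.
def build_infer_request(image_id, extra_modalities=None):
    request = {"model": "segmentation", "image": image_id}
    if extra_modalities:
        # Channel 0 stays the primary modality; the rest follow in order.
        request["images"] = [image_id] + list(extra_modalities)
    return request

req = build_infer_request("case01_t1", ["case01_t2", "case01_flair"])
print(req["images"])
```

The server would then resolve each entry in `images` to a file path and merge them, as in the snippet above.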
diazandr3s commented 2 years ago

Thanks, @one-matrix.

We've now added an argument that allows us to train multimodality models: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/configs/deepedit.py#L51
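As a rough sketch of what that argument does (the name `number_intensity_ch` mirrors the linked config, but the defaulting logic here is illustrative, not the exact implementation):

```python
# Hedged sketch: how a modality-count config argument can flow into the
# network's input-channel count. "number_intensity_ch" mirrors the linked
# deepedit config; the helper itself is hypothetical.
def resolve_in_channels(conf, extra_guidance_channels=0):
    # conf values arrive as strings from the command line (--conf key value)
    number_intensity_ch = int(conf.get("number_intensity_ch", "1"))
    return number_intensity_ch + extra_guidance_channels

# e.g. 4 intensity channels for BRATS (T1, T1ce, T2, FLAIR)
print(resolve_in_channels({"number_intensity_ch": "4"}))
```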

We need to further check how to edit labels in multimodality images in Slicer.

Thanks again!

lassoan commented 2 years ago

What do you mean by a multi-modality image? A multi-channel image? One option is to load it as a volume sequence. You can then switch between channels using the previous/next buttons in the sequence toolbar (and/or you can add a playback control widget to the module GUI).

If the images have different geometry (origin, spacing, axis directions) then they can be loaded as separate volumes and the user can choose between them in the master volume selector.

diazandr3s commented 2 years ago

Many thanks, @lassoan. Yes, I mean multi-channel images like in the BRATS dataset. All channels are registered and share the same origin, spacing, etc.

mbrzus commented 1 year ago

Hello all!

I want to use my segmentation model, which takes three input images. The model resamples and stacks the images into multiple channels during the "pre_transform" step, then creates the segmentation, which is saved in the physical space of InputImage1 using "post_transform".

Using 3D Slicer, I want to load the three images (co-registered but with different physical spaces, similar to what @lassoan described), run my model on them, and display the produced mask.

Is this functionality implemented? Could you please point me to the documentation/examples?

Thank you!

diazandr3s commented 1 year ago

Hi @mbrzus,

You could start MONAI Label pointing to a folder containing the first modality, and then use a transform that loads the other two modalities. Something like this:

import copy
import logging
from typing import Dict

import numpy as np
from monai.config import KeysCollection
from monai.transforms import EnsureChannelFirst, LoadImage, MapTransform, Resize, Spacing


class LoadOtherModalitiesd(MapTransform):
    """
    Load all modalities into one tensor:
    "0": first modality,
    "1": second modality,
    "2": third modality,
    """

    def __init__(self, keys: KeysCollection, target_spacing, allow_missing_keys: bool = False):
        super().__init__(keys, allow_missing_keys)
        self.target_spacing = target_spacing

    def __call__(self, data):
        d: Dict = dict(data)
        input_path = 'PATH_TO_THE_OTHER_MODALITIES'
        for key in self.key_iterator(d):
            img = copy.deepcopy(d[key])
            name = img.meta['filename_or_obj'].split('/')[-1].split('.')[0]
            all_mods = np.zeros(
                (4, img.array.shape[-3], img.array.shape[-2], img.array.shape[-1]), dtype=np.float32
            )
            all_mods[0, ...] = img.array[0, ...]
            logging.info(f'Size of the images in transform: {all_mods.shape}')
            loader = LoadImage(reader='ITKReader')
            chan_first = EnsureChannelFirst()
            spacer = Spacing(pixdim=self.target_spacing, mode="bilinear")
            res = Resize(spatial_size=(img.array.shape[-3], img.array.shape[-2], img.array.shape[-1]))
            for idx, mod in enumerate(['NAME_SECOND_MODALITY.nii.gz', 'NAME_THIRD_MODALITY.nii.gz']):
                aux = loader(input_path + name + mod)
                logging.info(f'Modality: {mod} is being read')
                aux = chan_first(aux[0])
                spaced_img = spacer(aux)
                logging.info(f'Modality: {mod} is being spaced')
                # Resizing works around images having slightly different resolution
                resized_img = res(spaced_img)
                all_mods[idx + 1, ...] = resized_img[0].array
            d['spatial_size'] = all_mods[0, ...]
            d[key].array = all_mods
        return d

This transform could be after the Spacing transform here: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/segmentation.py#L81
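One detail worth isolating is how the transform derives sibling modality paths from the first modality's filename (a self-contained sketch; the suffix convention and helper name are assumptions, not part of MONAI Label):

```python
import os


def sibling_modality_path(first_modality_path, suffix, input_path):
    # Derive the case name the same way the transform above does:
    # strip directories and everything after the first dot.
    name = first_modality_path.split('/')[-1].split('.')[0]
    return os.path.join(input_path, name + suffix)


print(sibling_modality_path('/data/images/case01.nii.gz', '_t2.nii.gz', '/data/mods'))
```

This means all modalities of a case must share a common basename prefix for the lookup to work.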

Hope this makes sense.

mbrzus commented 1 year ago

@diazandr3s Thank you, this could be helpful.

Also, can I load multiple image files into Slicer (multiple modalities per subject, e.g. T1w and T2w)? And can those loaded images be passed to the inference task to generate a label?

diazandr3s commented 1 year ago

In this case, the only modality visualized/loaded into Slicer is the first one (the folder used to start the MONAI Label server). The transform I shared allows you to load multimodality cases to the MONAI Label server to train and run inference. Visualizing/loading all modalities in Slicer is another thing.

mbrzus commented 1 year ago

@diazandr3s I understand that.

My current project requires multiple image modalities to label the dataset properly. Hence, to manually edit the generated mask, the tracer must be able to switch between modalities in Slicer.

@one-matrix attached this code snippet:

request["image"] = datastore.get_image_uri(request["image"])
if "images" in request:
    nii_files = [datastore.get_image_uri(f) for f in request["images"]]

Is it possible to modify the Slicer loading function to load multiple volumes? Could you point me to where the Slicer loading happens in the codebase?

diazandr3s commented 1 year ago

I understand.

Here you have the Slicer module source code: https://github.com/Project-MONAI/MONAILabel/blob/main/plugins/slicer/MONAILabel/MONAILabel.py

Here is the client that makes the API request to the MONAI Label server: https://github.com/Project-MONAI/MONAILabel/blob/main/plugins/slicer/MONAILabel/MONAILabelLib/client.py

And here is the method that specifically fetches the next sample: https://github.com/Project-MONAI/MONAILabel/blob/60fb5462140e568952132aa42b61e3e0c28ef1f3/plugins/slicer/MONAILabel/MONAILabelLib/client.py#L135

On the server side, here is the datastore class that manages a local file archive: https://github.com/Project-MONAI/MONAILabel/blob/main/monailabel/datastore/local.py

These are the methods you may want to update for multiple modalities: https://github.com/Project-MONAI/MONAILabel/blob/main/monailabel/datastore/local.py#L259-L279
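Sketching the server-side shape of such an update (hypothetical: `get_image_uri` exists on the datastore, but `get_image_uris`, the mixin, and `FakeDatastore` below are illustrative stand-ins, not MONAI Label code):

```python
# Hypothetical sketch of a multi-image lookup layered on a datastore that
# exposes get_image_uri(); FakeDatastore stands in for the real LocalDatastore.
class MultiModalMixin:
    def get_image_uris(self, image_ids):
        # Resolve each modality ID in order so channel 0 stays the primary one
        return [self.get_image_uri(i) for i in image_ids]


class FakeDatastore(MultiModalMixin):
    def __init__(self, files):
        self._files = files

    def get_image_uri(self, image_id):
        return self._files.get(image_id, "")


ds = FakeDatastore({"t1": "/data/t1.nii.gz", "t2": "/data/t2.nii.gz"})
print(ds.get_image_uris(["t1", "t2"]))
```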

I don't think it would be difficult to update the code so MONAI Label sends multiple images. However, I don't see an easy way of converting a binary file (the one returned by the MONAI Label server API) into multiple nodes in Slicer: https://github.com/Project-MONAI/MONAILabel/blob/main/plugins/slicer/MONAILabel/MONAILabel.py#L1249-L1255

Another way is to leave the code as is and load the other 2 modalities manually to Slicer. Then you can create/modify the label using all modalities.

Hope it makes sense.

mbrzus commented 1 year ago

This is great. Thank you, @diazandr3s!

lassoan commented 1 year ago

You can load 4D images into Slicer by changing slicer.util.loadVolume() to slicer.util.loadSequence(), and you can then switch between the sequence items using the sequence browser toolbar (or Ctrl + Shift + Left/Right arrow keys). For example:

For 4D volumes, instead of this:

volumeNode = slicer.util.loadVolume(some_path)
volumeNode.SetName(node_name)

You can do this:

volumeSequenceNode = slicer.util.loadSequence(some_path)
volumeSequenceNode.SetName(node_name)
# Get a volume node
browserNode = slicer.modules.sequences.logic().GetFirstBrowserNodeForSequenceNode(volumeSequenceNode)
browserNode.SetOverwriteProxyName(None, True)  # set the proxy node name based on the sequence node name
volumeNode = browserNode.GetProxyNode(volumeSequenceNode)

Sequence loading is only supported for NRRD. We really, really dislike NIfTI, but if there is a very strong need from the community then we may add 4D image support for NIfTI, too.
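For reference, a 4D volume that loadSequence() can pick up is typically a sequence NRRD whose header marks the extra axis as a `list` axis. A sketch of such a header (field values are illustrative, not a specification):

```
NRRD0005
# Sketch of a 4D sequence NRRD header (values illustrative)
type: float
dimension: 4
sizes: 4 240 240 155
kinds: list domain domain domain
space: left-posterior-superior
space directions: none (1,0,0) (0,1,0) (0,0,1)
space origin: (0,0,0)
encoding: gzip
```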