[Open] OmarAshkar opened this issue 1 year ago
Hi @OmarAshkar, thanks for your interest here. You are right: whole-slide images are too large, and the memory requirement for running inference on them directly is very high, so it is not feasible to use the current implementation to run inference on a WSI.
Actually, we have some tools to deal with the memory issue, such as GridPatchd and PatchWSIDataset, which don't need to load the whole image but instead record the location information of the patches. However, we don't have a proper inferer to blend the results back together yet; we do have some plans for it.
https://github.com/Project-MONAI/MONAI/blob/a77de4adff49037ffb24ea2cae217fac40302181/monai/transforms/spatial/dictionary.py#L1823 https://github.com/Project-MONAI/MONAI/blob/fdd07f36ecb91cfcd491533f4792e1a67a9f89fc/monai/data/wsi_datasets.py#L32
Adding @drbeh and @JHancox for more discussion here.
Thanks @KumoLiu ! My only solution for now is to break the wsi into smaller parts, then aggregate back after inference. Do you think there is a better solution?
Hi @OmarAshkar, yes, that's what I mentioned: you could use GridPatchd to extract all the patches, sweeping the entire image in a row-major sliding-window manner with optional overlaps, and then you only need to do the post-processing once.
You could also try running inference on lower-resolution level data. Thanks!
Hi @OmarAshkar,
Thanks for your interest in MONAI and for starting this conversation. As you and @KumoLiu correctly mentioned, this is due to the post-processing part: although inference is run on each patch, the post-processing is performed after the patches are merged (into an output the same size as the input).
The pipeline that you are using is for HoVerNet inference with a large ROI (which needs to fit into CPU memory at least). It splits the large ROI into patches, runs inference, and then merges them. Currently we don't have a HoVerNet WSI inference implementation in MONAI, but it is planned for the next release and is under development.
Meanwhile, what you can do is extract large ROIs from the WSI, say 2000x2000 or whatever is feasible on your machine, using SlidingPatchWSIDataset (instead of the simple Dataset used in the tutorial), and then run the HoVerNet inference. You will then end up with many 2000x2000 outputs (covering the WSI in a sliding-window fashion) that you need to merge.
Having said that, if you are not comfortable with handling the patch outputs yourself, you may need to wait for the next release, or maybe @JHancox has some sample code for WSI inference that he can share with you.
Thank you so much @drbeh for your insight. I will try your proposed solution and I am also waiting for the next release.
My question is: if I use SlidingPatchWSIDataset, will I be able to keep track of the positions of the tiles to put them back in order? I wonder if the output will have row and column indices.
@OmarAshkar Yes - there is metadata in each dataset batch item that includes the coordinates of the tile.
Hi @drbeh, just a quick question. I am using SlidingPatchWSIDataset, and I want to know how to use it with SaveImaged to save in a format similar to f"{image_id}_{patch_number}_{patch_col}_{patch_row}.png". It seems to me that SaveImaged allows only one meta_key; I tried meta_keys='name', and it exports each patch with a number only.
Also, it would be great if you could update me on whether there is a solution to glue the patches back together.
Thanks!
Hi @omashkartrx,
You should be able to get what you want by providing a custom function for the output_name_formatter argument. By default, it uses the following name formatter:
https://github.com/Project-MONAI/MONAI/blob/e375f2a17c098d7b802e5ca64322db6ce874a3aa/monai/data/folder_layout.py#L21-L30
But I didn't understand what you mean by meta_keys; can you give me a reference to it?
@drbeh Thank you. I was a little confused; it is working fine now. But if there were a solution to automatically aggregate the images back together, that would be great!
@omashkartrx the solution for WSI patch inference that glues the patch results back together is under development, and we are expecting to have it in the next release (v1.2). However, for the outputs we are planning to support only the Zarr format in this release. If there is any output format that you would prefer, please let us know so we can plan for it.
@drbeh I haven't used Zarr before (excited to try!), but it sounds like the exported patches will remain discrete. The heavy work for me is to re-collect the patches into their spatial locations and export them as a compressed TIFF.
Hi @omashkartrx - Zarr stores the patches discretely, but to the developer they behave as a single large array, so there is no need to recombine them. You just request the region of interest that you want, and it will collate the relevant patches for you - which is why it is so useful :)
Is your feature request related to a problem? Please describe. I am trying to use a trained HoVerNet model to run inference on many SVS whole-slide images. I followed everything in the tutorial except for changing PILReader to WSIReader, as in: https://github.com/Project-MONAI/tutorials/blob/6be932f821c9ce4e795748dde7574beea8f8ea2c/pathology/hovernet/inference.py#L69
I have tried both the cucim and tifffile backends, but neither can do the job quickly: a single WSI has taken more than 4 hours (and has not finished yet). This will make the tool infeasible to use. Also, cucim gives "CUDA out of memory" on multi-GPU.
Describe the solution you'd like I think the limiting step here has something to do with the post-processing after inference. There must be a way to multi-thread that using something like Dask.
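As a rough sketch of that idea, using a stdlib thread pool in place of Dask (`post_process` here is a hypothetical stand-in for the real HoVerNet post-processing, which is not shown):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def post_process(patch: np.ndarray) -> np.ndarray:
    # stand-in for the real HoVerNet post-processing of a single patch
    return (patch > 0.5).astype(np.uint8)

patches = [np.random.rand(256, 256).astype(np.float32) for _ in range(16)]

# post-process patches concurrently instead of one merged WSI-sized array
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(post_process, patches))
print(len(results))  # 16
```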
CC: @KumoLiu, who might be able to help triage the problem.
Update: all the runs, with either cucim or tifffile, stopped after about 5 hours with a "CUDA out of memory" error.