BMCV / galaxy-image-analysis

Galaxy tools for image analysis
MIT License

Tools required for conversion of KNIME image analysis workflows to Galaxy #105

Closed: rmassei closed this issue 4 months ago

rmassei commented 8 months ago

I made a list of potential tools that could be implemented for image processing and analysis. The list is derived from a comparison of the tools present in Galaxy with some workflows developed with the KNIME Image Processing extension. Overall, there is good coverage of tools, and it is possible to reproduce most of the workflows with few modifications.

https://galaxyproject.org/news/2024-03-08-hackathon-imaging/#workflow-translation-from-knime-to-galaxy

- Binary
- Process
- Labelling
- Quantification
- Visualization

kostrykin commented 8 months ago

Thanks for your suggestions, @rmassei! This looks very doable and will surely be very useful.

kostrykin commented 7 months ago

The tool added in #106 should cover all use cases you described under Binary, @rmassei. The tool added in #107 covers the functionality of the https://imagej.net/ij/plugins/inverter.html plugin, which you listed under Inverter.

The tool added in #108 performs a Voronoi tessellation, which, judging by the example you gave, is what you mean by Voronoi segmentation. However, I do not see what an input table would be used for. Can you elaborate on that?

The ImageJ plugin suite OrientationJ is a series of multiple plugins. Can you narrow down which functionality exactly is required? Then it will be easier to decide whether writing a Python tool that mimics that functionality is feasible, or whether it might be wiser to aim for a wrapper of the original ImageJ plugin.

rmassei commented 7 months ago

Sorry @kostrykin, I just copy-pasted the description from the KNIME node, and the table refers to the image input :smile:

Voronoi looks good. There is another option in the node to input an image with the "seed regions" before applying the Voronoi segmentation, but I do not know whether it makes sense to implement this in Galaxy or whether it can be achieved with other steps:

[screenshot of the KNIME node option]

Regarding OrientationJ, I have experience using the orientation parameter, which reports the orientation property of the image:

[screenshot: OrientationJ orientation measurement]

The orientation can then be used to straighten the image (in this case, a -9.101 degree rotation):

[screenshot: image after -9.101 degree rotation]

I found a bit of theoretical background here: https://bigwww.epfl.ch/demo/orientation/theoretical-background.pdf and https://bigwww.epfl.ch/demo/orientation/
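For intuition, here is a rough, hedged sketch of the structure-tensor approach those links describe. This is not the OrientationJ implementation; the function name and sigma value are illustrative, and sign conventions may differ from OrientationJ's output:

```python
import numpy as np
import skimage.filters

def dominant_orientation_deg(img, sigma=2):
    """Estimate the dominant orientation of an image, in degrees."""
    # Pre-smooth, then compute the image gradients.
    img = skimage.filters.gaussian(img, sigma=sigma)
    gx = skimage.filters.sobel_v(img)  # derivative along x (vertical edges)
    gy = skimage.filters.sobel_h(img)  # derivative along y (horizontal edges)
    # Structure tensor components, averaged over the whole image.
    jxx = np.mean(gx * gx)
    jxy = np.mean(gx * gy)
    jyy = np.mean(gy * gy)
    # Orientation of the dominant eigenvector of the structure tensor.
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    return np.degrees(theta)
```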

kostrykin commented 7 months ago

Thanks @rmassei!

So what we now have in voronoi_tessellation is basically your Voronoi segmentation with seeds, yet without the "Image to work on". I can't make much sense of what "Image to work on" could mean in that context, because a Voronoi tessellation is purely about the geometric relation of the seeds. Can you provide some more info?

Moreover, looking at the example images, I suspect that there is a threshold for the Voronoi tessellation, like a maximum distance (or thresholding the distance transform), but this is just a rough suspicion and nothing I've ever heard of. Can you confirm that, or do you have any further info? A sketch of what I mean follows below.
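To make the suspicion concrete, here is a minimal, hedged sketch of such a distance-bounded Voronoi tessellation (purely illustrative; the function name and the use of scipy are my assumptions, not how any existing tool works):

```python
import numpy as np
import scipy.ndimage as ndi

def bounded_voronoi(label_image, max_dist):
    """Voronoi tessellation of seed labels, cut off beyond max_dist."""
    # For each background pixel, find the distance to (and index of)
    # the nearest labeled seed pixel.
    distances, indices = ndi.distance_transform_edt(
        label_image == 0, return_indices=True)
    # Assign every pixel the label of its nearest seed.
    voronoi = label_image[tuple(indices)]
    # Pixels farther than max_dist from any seed belong to no region.
    voronoi[distances > max_dist] = 0
    return voronoi
```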

Regarding OrientationJ, thanks for the info; I will look into it and see how feasible it is.

rmassei commented 7 months ago

@kostrykin, you are completely right and I overlooked it, sorry for this. There is a background threshold that needs to be set for the Voronoi segmentation. Pixels below that value are simply considered not to be part of any cell. Moreover, it is also possible to add a "fill holes" post-processing step.

kostrykin commented 7 months ago

Thanks for the clarification! I think we can (almost) imitate this behavior already with the tools currently on board (edit: since #109 it should be exactly imitable).

The first step would be to compute the Voronoi tessellation from your labels. For this you can use the new Voronoi Tessellation tool. The next step would be to compute the 0/1 mask of the foreground (only pixels within this mask will be considered part of a cell). In a third step, we would use the new Process images using arithmetic expressions tool to multiply the Voronoi tessellation by the foreground mask. The result should be what you have been looking for.

For the computation of the foreground mask using intensity thresholding, we have the Threshold Image tool. However, this tool currently only supports automatically determined thresholds (e.g., Otsu). It is a no-brainer to extend that tool to also allow custom threshold values. However, note that this tool labels the foreground with 255, not with 1. So when it comes to the image arithmetic described above, you would also need to divide the mask by a factor of 255 (because a 0/1 mask is what you want).
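For reference, the three steps above roughly correspond to the following Python sketch. This is my own hedged approximation using scipy/scikit-image, not the tools' actual code; the function name and defaults are illustrative:

```python
import scipy.ndimage as ndi
import skimage.filters

def masked_voronoi(seeds, image, threshold=None):
    """Voronoi tessellation of seed labels, restricted to the foreground."""
    # Step 1: assign every pixel the label of its nearest seed
    # (a Voronoi tessellation via the Euclidean distance transform).
    _, indices = ndi.distance_transform_edt(seeds == 0, return_indices=True)
    voronoi = seeds[tuple(indices)]
    # Step 2: 0/1 foreground mask via intensity thresholding
    # (Otsu's method if no custom threshold is given).
    if threshold is None:
        threshold = skimage.filters.threshold_otsu(image)
    mask = (image > threshold).astype(voronoi.dtype)
    # Step 3: multiply tessellation and mask, so that pixels outside
    # the foreground belong to no cell (label 0).
    return voronoi * mask
```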

Let me know if you agree/disagree! @rmassei

kostrykin commented 7 months ago

> Regarding OrientationJ, I have experience using the orientation parameter, which reports the orientation property of the image

@rmassei We have added OrientationPy, which is the successor of OrientationJ from the same authors, in #110. I think everything is done now?

rmassei commented 7 months ago

Hi @kostrykin, thanks a lot for this! I am actually not able to reproduce the Voronoi segmentation by following the aforementioned steps. I am sure I am overlooking some passage or making a mistake somewhere; maybe you have a solution: https://usegalaxy.eu/u/rmassei88/h/voronoitest-1

Was OrientationPy already added to the tools?

kostrykin commented 7 months ago

> I am actually not able to reproduce the Voronoi segmentation by following the aforementioned steps. I am sure I am overlooking some passage or making a mistake somewhere; maybe you have a solution: https://usegalaxy.eu/u/rmassei88/h/voronoitest-1

I think the problem is that you piped the filtered image into Convert binary image into label map instead of a binary image. Besides, you used the factor 225 in your expression input1 * (input2)/225 for the Voronoi tessellation tool. This is supposed to be 255.
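For clarity, here is a tiny numpy sketch of what the corrected expression computes (the toy arrays are made up for illustration; input1 is the Voronoi label map, input2 the 0/255 foreground mask):

```python
import numpy as np

input1 = np.array([[1, 1, 2], [1, 2, 2]])          # Voronoi label map
input2 = np.array([[255, 0, 255], [0, 255, 255]])  # mask with values 0/255

# Dividing by 255 (not 225!) turns the mask into a 0/1 mask before
# multiplying, so masked-out pixels end up with label 0.
result = input1 * (input2 / 255)
```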

> Was OrientationPy already added to the tools?

#109 and #110 will be available on Galaxy EU presumably by Monday (maybe earlier if we're lucky). I will then create a small example of Voronoi segmentation for you :)

bgruening commented 7 months ago

> https://github.com/BMCV/galaxy-image-analysis/pull/109 and https://github.com/BMCV/galaxy-image-analysis/pull/110 will be available on Galaxy EU presumably by Monday (maybe earlier if we're lucky). I will then create a small example of Voronoi segmentation for you :)

Both updates should be available now.

kostrykin commented 7 months ago

@bgruening Hm, Galaxy is complaining that the Python script from #109 is missing:

    python: can't open file '/opt/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/imgteam/2d_auto_threshold/7db4fc31dbee/2d_auto_threshold/auto_threshold.py': [Errno 2] No such file or directory

bgruening commented 7 months ago

That happens when I do a manual installation ;)

I forgot a sync. This will work in 5 min.

rmassei commented 7 months ago

> > I am actually not able to reproduce the Voronoi segmentation by following the aforementioned steps. I am sure I am overlooking some passage or making a mistake somewhere; maybe you have a solution: https://usegalaxy.eu/u/rmassei88/h/voronoitest-1
>
> I think the problem is that you piped the filtered image into Convert binary image into label map instead of a binary image. Besides, you used the factor 225 in your expression input1 * (input2)/225 for the Voronoi tessellation tool. This is supposed to be 255.
>
> > Was OrientationPy already added to the tools?
>
> #109 and #110 will be available on Galaxy EU presumably by Monday (maybe earlier if we're lucky). I will then create a small example of Voronoi segmentation for you :)

Hi @kostrykin, I tried to change the workflow according to your suggestion but still cannot achieve a good segmentation; the output is basically the same as the thresholded image (step 7).

kostrykin commented 7 months ago

I've built an example of a Voronoi segmentation workflow based on your explanations: https://usegalaxy.eu/u/e26918e6b1264c81874871c01e988195/w/voronoi-segmentation

Required inputs:

[screenshot: required workflow inputs]

Indeed, to achieve good segmentation performance, the choice of the seeds is crucial.

And here is an example invocation for your input image, for which I have created the seeds by hand: https://usegalaxy.eu/u/e26918e6b1264c81874871c01e988195/h/voronoi-segmentation

[screenshot: Voronoi segmentation result]

I think this looks very much like what you had posted above.

Another pitfall I have noticed while experimenting with this is that our Filter 2D image tool changes the range of the image intensity values. This is probably because the data type is changed (from uint8 to something float-ish, which actually makes sense). We will have to look into this at some point to see whether it can be made more user-friendly and/or transparent.
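A hedged illustration of that pitfall, using scikit-image directly (whether the Galaxy tool behaves exactly like this is my assumption):

```python
import numpy as np
import skimage.filters
import skimage.util

img = np.array([[0, 128, 255]], dtype=np.uint8)

# Integer input is converted to float and rescaled to [0, 1], so a
# threshold chosen for the 0..255 range no longer applies afterwards.
smoothed = skimage.filters.gaussian(img, sigma=1)
print(smoothed.dtype, smoothed.max())  # float64, <= 1.0

# Rescale back to uint8 if downstream tools expect the original range.
restored = skimage.util.img_as_ubyte(smoothed)
```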

rmassei commented 7 months ago

Hi! I tried with the original seeds and the results look pretty neat:

[image: Voronoi segmentation with original seeds]

Small differences are just related to some threshold parameters that need to be tuned... but, overall, it seems possible to reproduce the same behavior!

kostrykin commented 7 months ago

Glad to hear that! What will the next steps be?

I hope to be able to improve the coloring of the Colorize labels tool in a timely manner, so that there will be fewer cases of hard-to-distinguish colors, like adjacent red and pink or green and teal.

rmassei commented 7 months ago

I am now going to try to put together the whole workflow and find a way to quantitatively compare the two outputs! I'll keep you posted.

rmassei commented 6 months ago

Hi @kostrykin! Back here after a month :) I finally managed to rebuild the KNIME workflows in Galaxy with some modularization:

1) Generic nuclei segmentation plus feature extraction
2) Voronoi segmentation plus feature extraction
3) Full combined workflow

Additionally, I created a workflow to test different manual threshold levels in batch:

4) Testing Manual Thresholds

This was particularly useful for testing different threshold levels before performing the Voronoi segmentation.

While workflow 1 is working pretty well, I am still having some problems with workflow 2.

It would be nice to have an output like this, if possible:

[screenshot: desired overlay output]

kostrykin commented 6 months ago

Thanks @rmassei!

I'm not sure I can follow. I understand that you are having issues with Workflow 2 (Voronoi segmentation plus feature extraction). Please clarify:

  1. The first image that you posted, is this the result of the "Overlay images" step in Workflow 2?
  2. Your first issue is that you want it to look like the second image you posted?
  3. Your second issue is that you need to extract image features from the segmented image regions?
  4. What do you mean by "plot them on the original image"? Is this somehow related to my first question?

rmassei commented 6 months ago

Sorry, although I was not clear in the explanation, you got all my issues :D

1 and 4) Yes, step 14. I guess "connected component" is not really the best option.
2) Yes, it would be nice to be able to plot the colorized image on the original one, but I am failing to perform this step.
3) Exactly.

kostrykin commented 6 months ago

Ok, I now have a clearer picture, but can you please explain:

Issue 1: To me, the expected image (the lower one) looks like a blend of the colorized Voronoi segmentation and the original image. Do you have any extra info on how the blending of the Voronoi segmentation and the original image works? I think it looks like a linear combination of the two, but I'm not entirely sure; see the sketch below for what I mean by that.
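By a linear combination I mean something like the following hedged sketch (plain alpha blending; the function name and the 0.5 default are illustrative, not anything from the tools):

```python
import numpy as np

def blend(original_rgb, colorized_rgb, alpha=0.5):
    """Linear (alpha) blend of two uint8 RGB images of equal shape."""
    mixed = (alpha * colorized_rgb.astype(float)
             + (1 - alpha) * original_rgb.astype(float))
    return mixed.astype(np.uint8)
```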

Issue 3: Am I right that you intend to plot the extracted image features into the blended image? If so, can you please elaborate on what you mean by plotting, i.e., what kind of plots do you need?

rmassei commented 6 months ago

Issue 1: Unfortunately, I do not have further info. I tried to perform a linear blending of the color map and the original image using the overlay tool, but the problem is that the color map is RGB color and I cannot find a way to convert it to 8-bit RGB before blending.

Issue 3: Sorry, I explained myself badly. The issue is extracting 2D features from the Voronoi segmentation. I tried to run the tool after the arithmetic expression, but I received the following fatal error:

    Matplotlib is building the font cache; this may take a moment.
    Traceback (most recent call last):
      File "/opt/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/imgteam/2d_feature_extraction/2436a8807ad1/2d_feature_extraction/2d_feature_extraction.py", line 62, in <module>
        raw_label_image = skimage.io.imread(label_file)
      File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/skimage/io/_io.py", line 62, in imread
        img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
      File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/skimage/io/manage_plugins.py", line 214, in call_plugin
        return func(*args, **kwargs)
      File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/skimage/io/_plugins/pil_plugin.py", line 36, in imread
        im = Image.open(f)
      File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/PIL/Image.py", line 3148, in open
        "cannot identify image file %r" % (filename if filename else fp)
    PIL.UnidentifiedImageError: cannot identify image file <_io.BufferedReader name='/data/dnb09/galaxy_db/files/5/0/8/dataset_50856838-1c7c-4d8f-aff1-c05fa77bcb4b.dat'>

kostrykin commented 6 months ago

OK, I think I mostly got it now.

Can you please provide a history of an example execution of Workflow 2, shared via a link? Or, alternatively, a full set of inputs (input files and values). I will then look into it.

rmassei commented 6 months ago

Here is the execution: https://usegalaxy.eu/u/rmassei88/h/voronoitest

kostrykin commented 6 months ago

Thanks!

A few quick notes regarding Issue 3:

I will look into Issue 1 later.

kostrykin commented 6 months ago

Another issue you have:

This is why your *18: Colorize label map on data 17* looks strange.

kostrykin commented 6 months ago

Regarding Issue 1:

@rmassei Let me know if you need help plugging it all together.

rmassei commented 6 months ago

Cannot find the convert tool, is it already there? :D

kostrykin commented 6 months ago

New tools and tool versions usually become available on Mondays.


Edit: It's there now https://github.com/usegalaxy-eu/usegalaxy-eu-tools/pull/712

rmassei commented 6 months ago

Just received this error:

[screenshot of the error message]

bgruening commented 6 months ago

Can you please try again?

rmassei commented 6 months ago

Everything worked out:

[image: linear blending result]

bgruening commented 6 months ago

Looks like modern art to me ... is everything working, like everything? :)

rmassei commented 6 months ago

Yep, the feature extraction also worked out perfectly :+1: Overall, a background removal step is still missing; it does not seem necessary here to improve the image quality, but maybe it can be useful for other processing workflows?

https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/18_image_filtering/03_background_removal.html

I guess a similar effect can be achieved by using the arithmetic expression tool, or am I wrong?

kostrykin commented 6 months ago

> I guess a similar effect can be achieved by using the arithmetic expression tool, or am I wrong?

Yes, background removal using a Gaussian filter (see the link you posted) can already be achieved using the tools on board, as we have Gaussian filters and we have division using arithmetic expressions. More specialized techniques like top-hat, rolling ball, or rank filters can be provided without too much effort, so let me know if this is needed.
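As a hedged sketch, the two-step recipe (a Gaussian blur, then division via an arithmetic expression such as input1 / input2) corresponds to roughly the following; the sigma value and the epsilon guard are illustrative choices, not anything prescribed by the tools:

```python
import numpy as np
import skimage.filters

def remove_background(img, sigma=50):
    """Divide an image by a Gaussian estimate of its background."""
    img = img.astype(np.float64)
    # Step 1: a large-sigma Gaussian blur approximates the slowly
    # varying background.
    background = skimage.filters.gaussian(img, sigma=sigma)
    # Step 2: divide by the background estimate (guarding against
    # division by zero).
    return img / np.maximum(background, np.finfo(float).eps)
```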