lmb-freiburg / Unet-Segmentation

The U-Net Segmentation plugin for Fiji (ImageJ)
https://lmb.informatik.uni-freiburg.de/resources/opensource/unet
GNU General Public License v3.0

Normalization of pixel resolution #60

Closed sachahai closed 4 years ago

sachahai commented 4 years ago

Hello, thanks a lot for the efficient 2D segmentation plugin you provide. It is hard to find a pretrained model that performs so well out of the box. I have a few questions about points for which I didn't find answers in your supplementary notes.

1) Does the built-in preprocessing perform any kind of illumination correction, and would you advise doing so?

2) The normalization of the pixel resolution to 0.5 x 0.5 µm leads to a down-sampling of my dataset, which is of higher resolution. What kind of down-sampling do you perform? And what kind of up-sampling of the mask should I do at the end of the pipeline to retrieve the same shape as the input?

3) It is not clear to me how the 2D segmentation model reacts to multi-channel input data. I have a dataset with two channels (nucleus and cell (actin)). I fine-tuned two independent 1-channel networks and only merge the output probability maps in a custom post-processing step (shape-based and seeded watershed). Is there a way of having a single U-Net output both nucleus and cytoplasm probability maps, and would that lead to better results because both probability maps would then be based on both channels?

Thank you very much for your help

ThorstenFalk commented 4 years ago
  1. No, we only normalize the intensity range to [0, 1]. Prior data normalization can help, but be aware that it reduces the variability of the training data: once you choose to apply illumination correction, you must apply it to all subsequent images as well, because the network never had the chance to learn to ignore uneven illumination.

If you want a maximally flexible model, I advise using the raw data and letting the network learn to cope with all kinds of real-world effects. With very little training data for a specific experiment, though, illumination correction (and other kinds of normalization) may be a good option for maximum performance.
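For concreteness, here is a minimal sketch of the kind of per-image min-max normalization described above. It is an illustration only, not the plugin's actual code, and the function name is mine:

```python
import numpy as np

def normalize_intensity(img):
    """Rescale an image's intensity range to [0, 1] (min-max normalization).

    Mirrors the behaviour described above only in spirit; the plugin's
    actual implementation may differ (e.g. per-channel handling).
    """
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: avoid division by zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)
```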

  2. The plugin performs bilinear interpolation during down-sampling. I would up-sample the score maps using bilinear or bicubic interpolation and then apply argmax over the channel dimension to obtain the final class labels (see the sketch below). Alternatively, you can fine-tune the model to your element size by clicking the "From image" button in the model element size selection of the training/finetuning dialog.
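A hedged sketch of that up-sample-then-argmax step, assuming score maps of shape (C, H, W) and SciPy available; the helper name and the `factor` computation are mine:

```python
import numpy as np
from scipy.ndimage import zoom

def labels_from_scores(scores, factor):
    """Up-sample class score maps, then take the per-pixel argmax.

    scores: (C, H, W) score maps at the network's 0.5 x 0.5 um
            working resolution.
    factor: zoom factor back to the original pixel grid, e.g.
            0.5 / original_pixel_size_um.
    """
    # order=1 -> bilinear; use order=3 for bicubic interpolation.
    upsampled = zoom(scores, (1, factor, factor), order=1)
    return np.argmax(upsampled, axis=0)  # final class label per pixel
```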

  3. Multi-channel input data are no problem; the channels are individually normalized and both are used for training if available. You will probably need more fine-tuning iterations because the additional weights in the first layer are randomly initialized. Your approach is perfectly valid, but I would at least try to train one network for both classes at once. This can be easily done by naming the ROIs of your annotations accordingly, e.g. all ROIs for nuclei should be named "nucleus" and all ROIs for cytoplasm should be named "cytoplasm". It does not matter in which channel the annotations are placed.

sachahai commented 4 years ago

Thanks a lot for your answers, they helped me a lot. I will give the multi-channel input network a try and compare the results.

I have one more little question: in the original U-Net paper of 2015, you presented (in the figure below) an approach with a weighted loss that makes the network aware of individual object instances even though the segmentation task is only a foreground/background differentiation:

[Screenshot: weighted-loss / weight map figure from the U-Net paper]

Is there a way of using this approach with the U-Net plugin in ImageJ? It would greatly enhance single-cell segmentation if the cell borders are learnt well, and would require less post-processing. Or is this weighted loss already implemented/computed from the ROI annotations we use when fine-tuning a network?

Thanks a lot again

ThorstenFalk commented 4 years ago

This is already implemented and enabled by default; if you wanted to disable it, you'd have to train an entirely new model.
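For reference, the weight map from the 2015 paper is w(x) = w_c(x) + w_0 · exp(−(d_1(x) + d_2(x))² / (2σ²)), where d_1 and d_2 are the distances to the nearest and second-nearest cell, with w_0 = 10 and σ ≈ 5 px. Below is a sketch of that formula; it is my own illustration, not the plugin's implementation, and it restricts the border emphasis to background pixels, which is a common reading of the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(instances, w0=10.0, sigma=5.0):
    """Pixel-wise loss weights following Eq. 2 of Ronneberger et al. 2015.

    instances: 2D int array, 0 = background, 1..N = individual cell labels.
    Returns a float map emphasizing narrow gaps between touching cells.
    """
    labels = np.unique(instances)
    labels = labels[labels > 0]
    weights = np.ones(instances.shape, dtype=np.float32)  # w_c(x) = 1 here
    if len(labels) >= 2:
        # Distance from every pixel to each individual cell.
        dists = np.stack(
            [distance_transform_edt(instances != l) for l in labels])
        dists.sort(axis=0)
        d1, d2 = dists[0], dists[1]  # nearest / second-nearest cell
        border = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
        weights += np.where(instances == 0, border, 0.0)
    return weights
```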