smistad / FAST

A framework for high-performance medical image processing, neural network inference and visualization
https://fast.eriksmistad.no
BSD 2-Clause "Simplified" License

Zero mean intensity normalization #172

Open MarkusDrange opened 1 year ago

MarkusDrange commented 1 year ago

As a pre-processing step, I apply zero-mean intensity normalization over all three RGB channels, using the same per-channel mean and standard deviation for all patches:

    ADP_MEAN = [0.81233799, 0.64032477, 0.81902153]
    ADP_STD = [0.18129702, 0.25731668, 0.16800649]

    [...]
    transforms.Normalize(ADP_MEAN, ADP_STD)

I understand this is not supported in FAST-Pathology pre-processing; would it be possible to add this functionality?
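For reference, a minimal NumPy sketch of what `transforms.Normalize` computes here: each channel c is mapped to (x[c] - mean[c]) / std[c]. The array shapes and the random patch are illustrative assumptions, not FAST-Pathology code.

```python
import numpy as np

# Per-channel statistics from the dataset (same values as above)
ADP_MEAN = np.array([0.81233799, 0.64032477, 0.81902153]).reshape(3, 1, 1)
ADP_STD = np.array([0.18129702, 0.25731668, 0.16800649]).reshape(3, 1, 1)

patch = np.random.rand(3, 64, 64)  # stand-in CHW patch with values in [0, 1]

# Zero-mean, unit-std normalization applied per channel via broadcasting
normalized = (patch - ADP_MEAN) / ADP_STD
```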

andreped commented 1 year ago

AFAIK, the NeuralNetwork PO has a setMeanAndStandardDeviation method, but it is not possible to set the mean and std values through an Attribute (see here).

It also looks like the mean and std values are single floats, so it does not support multi-channel input, i.e., applying a different mean and std to each channel individually.

smistad commented 1 year ago

Multi-channel values are not supported at the moment. But if the values are fixed, you can simply add the normalization as a layer to your model, so that the first layer in your network normalizes the input image before passing it on to the rest of the layers.
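A minimal PyTorch sketch of this approach: wrap the fixed statistics in a small module and prepend it to the network. The `InputNormalization` class name and the `nn.Conv2d` stand-in backbone are hypothetical, not part of FAST.

```python
import torch
import torch.nn as nn

class InputNormalization(nn.Module):
    """Fixed per-channel zero-mean/unit-std normalization baked into the model."""
    def __init__(self, mean, std):
        super().__init__()
        # Buffers (not parameters): saved with the model, moved with .to(device),
        # but never updated by the optimizer.
        self.register_buffer("mean", torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std

ADP_MEAN = [0.81233799, 0.64032477, 0.81902153]
ADP_STD = [0.18129702, 0.25731668, 0.16800649]

backbone = nn.Conv2d(3, 8, kernel_size=3)  # stand-in for the real network
model = nn.Sequential(InputNormalization(ADP_MEAN, ADP_STD), backbone)
```

Because the statistics travel with the exported model, the deployed network then expects raw (unnormalized) input images.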

smistad commented 1 year ago

This functionality can be added to the already existing ZeroMeanUnitVariance PO: https://github.com/smistad/FAST/blob/master/source/FAST/Algorithms/IntensityNormalization/ZeroMeanUnitVariance.hpp

andreped commented 1 year ago

> This functionality can be added to the already existing ZeroMeanUnitVariance PO: https://github.com/smistad/FAST/blob/master/source/FAST/Algorithms/IntensityNormalization/ZeroMeanUnitVariance.hpp

Yes, I saw this one, but I believe it has the same problem as the NeuralNetwork PO: it is only designed for single-channel images, as per this. Also, one would need to add support for setting mean and std, which the NeuralNetwork PO already supports, so wouldn't it make more sense to add it to the NeuralNetwork PO? Or both?

andreped commented 1 year ago

> But if the values are fixed, you can simply add the normalization as a layer to your model, so that the first layer in your network normalizes the input image before passing it on to the rest of the layers.

I have done this for TF models before. Works wonders.

Have you tried the same in PyTorch, @MarkusDrange? Did it resolve the issue?

MarkusDrange commented 1 year ago

Yes! Worked perfectly after adding it to the forward function of my backbone.
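A sketch of what "adding it to the forward function" can look like, assuming a hypothetical PyTorch backbone (the `Backbone` class and its single conv layer are illustrative, not the actual model):

```python
import torch
import torch.nn as nn

ADP_MEAN = [0.81233799, 0.64032477, 0.81902153]
ADP_STD = [0.18129702, 0.25731668, 0.16800649]

class Backbone(nn.Module):
    """Hypothetical backbone that normalizes its own input in forward()."""
    def __init__(self):
        super().__init__()
        self.register_buffer("mean", torch.tensor(ADP_MEAN).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor(ADP_STD).view(1, 3, 1, 1))
        self.conv = nn.Conv2d(3, 8, kernel_size=3)  # stand-in for the real layers

    def forward(self, x):
        x = (x - self.mean) / self.std  # normalize before the rest of the network
        return self.conv(x)
```

With this change, `transforms.Normalize` can be dropped from the external pre-processing pipeline, and the exported model behaves the same way inside FAST-Pathology.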

smistad commented 1 year ago

>> This functionality can be added to the already existing ZeroMeanUnitVariance PO: https://github.com/smistad/FAST/blob/master/source/FAST/Algorithms/IntensityNormalization/ZeroMeanUnitVariance.hpp

> Yes, I saw this one, but I believe it has the same problem as the NeuralNetwork PO: it is only designed for single-channel images, as per this. Also, one would need to add support for setting mean and std, which the NeuralNetwork PO already supports, so wouldn't it make more sense to add it to the NeuralNetwork PO? Or both?

I meant that this functionality can be implemented and added to the ZeroMeanUnitVariance PO.