frankkramer-lab / MIScnn

A framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning
GNU General Public License v3.0

Using trained weights for a new dataset #26

Open · Itzikwa opened this issue 4 years ago

Itzikwa commented 4 years ago

Hi Dr. Muller,

I tried the kits19 notebook and it runs pretty well. But because of my (relatively) weak GPU, I had to train on only a few samples for a limited number of epochs, which, unsurprisingly, means the results weren't satisfactory.

I was wondering: if I'm not wrong, the weights of the well-trained network are saved to an HDF5 file named model. Is there a way to use your trained weights to improve the network's performance? (Although my dataset is different from the kits19 dataset, I guess it would still be useful, am I wrong?)

Another issue: how can I know which subfunctions, clipping, and resampling have to be set for the preprocessor? (Tell me if you want me to open a separate issue for this question...)

My dataset contains about 30 jaw CT scans stored as NIfTI files.

Cheers, Itzik

muellerdo commented 4 years ago

Hi Itzik,

> I tried the kits19 notebook and it runs pretty well. But because of my (relatively) weak GPU, I had to train on only a few samples for a limited number of epochs, which, unsurprisingly, means the results weren't satisfactory.

I am currently preparing a Jupyter Notebook for training a more powerful model on the KiTS19 data set with a smaller GPU (only 9 GB VRAM). I will add this Notebook to the MIScnn examples when it's ready :)

> I was wondering: if I'm not wrong, the weights of the well-trained network are saved to an HDF5 file named model. Is there a way to use your trained weights to improve the network's performance? (Although my dataset is different from the kits19 dataset, I guess it would still be useful, am I wrong?)

Very good question! You are right: you can always save the weights of your fitted models on disk in HDF5 format (the default format for Keras model storage). Now the interesting part: is it useful to reuse weights trained on a different medical condition?

I have to admit that I haven't done any testing on this and don't have a paper in mind that analyses model reusability in medical imaging. I personally would argue/think that reusing such weights would at least save training time, even if the conditions differ.

> Another issue: how can I know which subfunctions, clipping, and resampling have to be set for the preprocessor? (Tell me if you want me to open a separate issue for this question...)

This requires knowledge about your data set and experience in medical imaging.

Cheers, Dominik

Itzikwa commented 4 years ago

Hi,

Thank you very much for your answers (which were as usual very detailed and clear).

> Now the interesting part: is it useful to reuse weights trained on a different medical condition?

Even if using a pretrained model "only" saves time, I would like to use it. So, if you can explain how it can be done, it would be very helpful.

> If you have CT data, I would also always recommend performing a clipping to your desired Hounsfield unit scale (different tissues, organs etc. have different HU ranges; clipping to these ranges makes things easier for the model).

I'm using CBCT data (usually used for the oral region). I found out that it's not that simple: the Hounsfield unit scale isn't directly applicable to CBCT images, am I wrong?

> Resampling: For 3D volumes (MRI, CT) you normally have metadata like slice thickness and voxel spacing. The problem for the model is that these voxel spacings are not normalized between samples, which makes it harder to learn certain features. Resampling them to a common voxel spacing is highly recommended if you have this metadata (which is sadly sometimes not the case).

I think I got the general idea, but I still don't understand the meaning of the three numbers in the init function (3.22, 1.62, 1.62 in the kits19 example).

muellerdo commented 4 years ago

> Thank you very much for your answers (which were as usual very detailed and clear).

No problem. I hope that they are helpful.

> Even if using a pretrained model "only" saves time, I would like to use it. So, if you can explain how it can be done, it would be very helpful.

If you have a model, e.g. let's say we want to use the model from this lungs & COVID-19 infected region segmentation on CT data, we would proceed as follows:

```py
# Imports as used in the MIScnn examples
from miscnn import Neural_Network
from miscnn.neural_network.architecture.unet.standard import Architecture
from miscnn.neural_network.metrics import tversky_crossentropy, \
    tversky_loss, dice_soft, dice_crossentropy

# Initialize the Architecture
unet_standard = Architecture(depth=4, activation="softmax",
                             batch_normalization=True)

# Create the Neural Network model (pp is your already configured Preprocessor)
model = Neural_Network(preprocessor=pp, architecture=unet_standard,
                       loss=tversky_crossentropy,
                       metrics=[tversky_loss, dice_soft, dice_crossentropy],
                       batch_queue_size=3, workers=3, learning_rate=0.001)

# Load the pretrained weights from disk
model.load("/home/mudomini/covid19/models/model.fold_0.best_loss.hdf5")

# Do stuff with your model, e.g. run inference on your samples
model.predict(sample_list)
```
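Note that the Architecture and Preprocessor configuration have to match the ones used when the weights were originally trained; otherwise the stored weights will not fit the model graph.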



> I'm using CBCT data (usually used for the oral region). I found out that it's not that simple: the Hounsfield unit scale isn't directly applicable to CBCT images, am I wrong?

Puh, interesting question. I personally have not worked with a cone-beam CT data set yet.
You are right that Hounsfield units are biased in CBCT scans.

This article says that there are some approaches for converting CBCT grey values into HUs, but that these methods are more of a "we tried" and less of a "this is the solution":
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3520236/
But the paper is 10 years old. Maybe the literature reveals some newer techniques.

So far, I guess it would be advisable to perform a Z-score normalization on the images as usual and to avoid window clipping based on HUs.
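For illustration, here is a minimal sketch of such a preprocessor in MIScnn, assuming the built-in `Normalization` subfunction with `mode="z-score"` together with a `Resampling` subfunction and an already created `data_io`; the spacing and patch shape below are placeholder values, not recommendations:

```py
from miscnn import Preprocessor
from miscnn.processing.subfunctions import Normalization, Resampling

# Z-score normalize the intensities instead of clipping to an HU window
sf_normalize = Normalization(mode="z-score")
sf_resample = Resampling((1.0, 1.0, 1.0))   # placeholder spacing

pp = Preprocessor(data_io, data_aug=None, batch_size=2,
                  subfunctions=[sf_resample, sf_normalize],
                  prepare_subfunctions=True, prepare_batches=False,
                  analysis="patchwise-crop", patch_shape=(80, 160, 160))
```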

> I think I got the general idea, but I still don't understand the meaning of the three numbers in the init function (3.22, 1.62, 1.62 in the kits19 example).

The general idea is that you normalize the voxel spacing.
Each axis of a 3D image has its own voxel spacing.

E.g. a CT scan with shape 120 x 512 x 512 can have a voxel spacing of 1 x 1.8 x 1.8, while another CT scan with shape 130 x 512 x 512 can have a voxel spacing of 1 x 1.6 x 1.6.
The problem is that our neural network needs a homogeneous voxel spacing across all volumes in the data set. Therefore, we have to normalize them to e.g. 1 x 1 x 1 or, as in the kits19 example, 3.22 x 1.62 x 1.62.
We change the voxel spacing of a volume by changing its shape -> resizing it.
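As a worked example of this relationship (plain NumPy, using the scan from above and the kits19 target spacing):

```py
import numpy as np

# Resample a 120 x 512 x 512 scan with spacing 1.0 x 1.8 x 1.8 mm
# to the kits19 target spacing of 3.22 x 1.62 x 1.62 mm
old_shape = np.array([120, 512, 512])
old_spacing = np.array([1.0, 1.8, 1.8])
new_spacing = np.array([3.22, 1.62, 1.62])

# The physical extent (shape * spacing) stays the same, therefore:
# new_shape = old_shape * old_spacing / new_spacing
new_shape = np.round(old_shape * old_spacing / new_spacing).astype(int)
print(new_shape)  # -> [ 37 569 569]
```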

I find it a good metaphor to think of image formats like 16:9 or 4:3. The images have different resolutions/shapes like 1920x1080 or 1280x720, but we want them all to have a format of e.g. 16:9. I hope this metaphor is helpful and not more confusing :x

How do you find the correct voxel spacing for your data set?
-> You should aim for a patch shape that is between 1/64 and 1/5 of the median image shape after resampling.
With the help and contributions of @MichaelLempart, I'm currently preparing a new example for lung cancer segmentation based on a DICOM data set. For this Jupyter Notebook example, I also included a small section on how to identify the correct voxel spacing.

Here is the section:
> ## Find out good voxel spacing
> 
> Due to inconsistent voxel spacings between CT volumes, it is required to normalize voxel spacing (resampling).
>
> The voxel spacing directly influences the volume size, therefore we have to identify a suited volume shape which is a good fit for our patch shape.
> A good patch shape <-> median image shape ratio is between 1/64 and 1/5.
> ```py
> from miscnn.processing.subfunctions import Resampling
> sf_resample = Resampling((3.22, 1.62, 1.62))
> 
> from miscnn import Preprocessor
> pp_test = Preprocessor(data_io, data_aug=None, batch_size=1, subfunctions=[sf_resample], 
>                        prepare_subfunctions=True, prepare_batches=False, 
>                        analysis="fullimage")
> 
> from miscnn.neural_network.data_generator import DataGenerator
> # Run all samples through the preprocessor once (no training, no shuffling)
> data_gen = DataGenerator(sample_list, pp_test, training=False,
>                          shuffle=False, iterations=None)
> # Collect the resampled shape of every volume
> x = []
> y = []
> z = []
> for batch in data_gen:
>     x.append(batch.shape[1])
>     y.append(batch.shape[2])
>     z.append(batch.shape[3])
> 
> import numpy as np
> print(np.median(x), np.median(y), np.median(z))
> ```
> `124.5 308.0 308.0`
> 
> Patch shape: 80 x 160 x 160 = 2,048,000 voxels
> Median volume shape: 124.5 x 308 x 308 = 11,810,568 voxels
> 2048000 / 11810568 ≈ 0.1734, which lies within the recommended range of 1/64 (≈ 0.016) to 1/5 (= 0.2).
>
> The median image shape is looking good.

Maybe this makes the process a little bit clearer.

Theoretically, I could implement a function which automatically calculates a suitable voxel spacing for the data set given a desired ratio, roughly as sketched below.
I'm going to put this idea on my to-do agenda board.
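To make the idea concrete, here is a rough sketch in plain NumPy; `suggest_spacing` is a hypothetical helper, not part of the MIScnn API. It uses the fact that scaling every spacing axis by a factor k shrinks the median volume by k^3, so k = (ratio * median_vol / patch_vol)^(1/3):

```py
import numpy as np

# Hypothetical helper: scale the current spacing so that the patch volume
# becomes `ratio` of the resulting median volume (sketch, not MIScnn API)
def suggest_spacing(median_shape, current_spacing, patch_shape, ratio=0.2):
    patch_vol = np.prod(patch_shape)
    median_vol = np.prod(median_shape)
    # Multiplying each spacing axis by k divides the volume by k**3,
    # so we solve: patch_vol / (median_vol / k**3) == ratio
    k = (ratio * median_vol / patch_vol) ** (1 / 3)
    return tuple(np.asarray(current_spacing, dtype=float) * k)

# Values from the lung cancer example above, targeting the upper 1/5 ratio
print(suggest_spacing((124.5, 308, 308), (3.22, 1.62, 1.62),
                      (80, 160, 160), ratio=0.2))
```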

Cheers,
Dominik