bodokaiser / mrtous

Generate US images from MR brain images.
BSD 3-Clause "New" or "Revised" License

improve model performance #28

Open bodokaiser opened 7 years ago

bodokaiser commented 7 years ago

One

Architecture

[architecture images: L1 loss, L2 loss]

Plots

[loss plots: plot 211, plot 213]

Testing

[test samples: one-testing1, one-testing2]

Training

[training samples: one-training1, one-training2, one-training3]

Two

Architecture

[architecture images: L1 loss, L2 loss]

Plots

[loss plots: plot 207, plot 209]

Testing

[test samples: two-testing1, two-testing2]

Training

[training samples: two-training1, two-training2, two-training3]

U-Net

Architecture

[architecture images: L1 loss, L2 loss]

Plots

[loss plots: plot 203, plot 205]

Testing

[test samples: unet-testing1, unet-testing2]

Training

[training samples: unet-training1, unet-training2, unet-training3]
bodokaiser commented 7 years ago

U-Net (lr=1e-4)

[loss plots: L1 loss, L2 loss]

Plots

[loss plots: plot 215, plot 217]

Testing

[test samples: unet1-testing1, unet2-testing2]

Training

[training samples: unet2-training1, unet2-training2, unet2-training3, unet2-training4]
bodokaiser commented 7 years ago

@albarqouni I started a run with lr=1e-5. It should finish tomorrow. If you want to check image progress over epochs, you can visit http://52.15.106.125:3000/ until tomorrow.

bodokaiser commented 7 years ago

U-Net (lr=1e-5)

[loss plots: L1 loss, L2 loss]

Plots

[loss plots: plot 219, plot 221]

Testing

[test samples: unet3-testing1, unet3-testing2]

Training

[training samples: unet3-training1, unet3-training2, unet3-training3, unet3-training4]
albarqouni commented 7 years ago

What kind of pre-processing do you have? Normalization, etc.? Do you have any batch normalization? Have you checked the gradient histogram?

bodokaiser commented 7 years ago

I apply these transforms:

1. input images (MR)
   - clipping ([-32768, 32767] -> [-32000, 0] -> [0, 32000])
   - histnorm (nbins=256)
2. target images (US)
   - histnorm (nbins=256)

I use `skimage.exposure.equalize_hist` for histogram normalization.
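For reference, here is a minimal sketch of those transforms; the function names are mine, and only the value ranges and `equalize_hist` usage come from the description above:

```python
import numpy as np
from skimage import exposure


def preprocess_mr(image):
    """Sketch of the MR input transform described above: clip the
    int16 range down to [-32000, 0], shift to [0, 32000], then
    histogram-equalize with 256 bins (output lands in [0, 1])."""
    clipped = np.clip(image, -32000, 0) + 32000
    return exposure.equalize_hist(clipped, nbins=256)


def preprocess_us(image):
    """Target (US) transform: histogram equalization only."""
    return exposure.equalize_hist(image, nbins=256)
```

Note that `equalize_hist` returns floats in [0, 1], so both inputs and targets end up on the same scale.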

bodokaiser commented 7 years ago

No, that U-Net model has no BatchNorm layers (it is as close as possible to the original one).

What do you mean by gradient histogram?

albarqouni commented 7 years ago

I wouldn't do any histogram equalization at the moment. You might need a Batch-Normalization layer before the first convolutional layer. Hopefully that helps!

bodokaiser commented 7 years ago

What about having BatchNorm after every Conv layer?

albarqouni commented 7 years ago

That's possible. However, we don't want to change the architecture at the moment. The first layer acts as a pre-processing step.
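The idea above can be sketched as a wrapper that puts a single `BatchNorm2d` in front of an unchanged U-Net, so the network learns its own input normalization instead of relying on histogram equalization. This is a hypothetical PyTorch sketch; `unet` is a stand-in for the actual model class in this repo:

```python
import torch
import torch.nn as nn


class NormalizedUNet(nn.Module):
    """Wrap an existing model with a BatchNorm2d layer in front,
    acting as a learned pre-processing step (assumption: single-channel
    MR input). The wrapped architecture itself is left unchanged."""

    def __init__(self, unet, in_channels=1):
        super().__init__()
        self.input_norm = nn.BatchNorm2d(in_channels)
        self.unet = unet

    def forward(self, x):
        # normalize the raw input, then run the unmodified network
        return self.unet(self.input_norm(x))
```

A usage sketch with a dummy stand-in model: `NormalizedUNet(nn.Conv2d(1, 1, 3, padding=1))` accepts the same `(N, 1, H, W)` tensors as the wrapped network.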

bodokaiser commented 7 years ago

U-Net

[loss plots: L1 loss, L2 loss]

Plots

[loss plots: L1 loss, L2 loss]

Testing

[test samples: unet-bn-testing1, unet-bn-testing2]

Training

[training samples: unet-bn-training1, unet-bn-training2, unet-bn-training3, unet-bn-training4]