filipetrocadoferreira / end2endlobesegmentation

Code to replicate Experiments from the paper 'End-to-End Supervised Lobe Segmentation'

why "K.set_learning_phase(1)" in inference mode? #4

Closed Yuxiang1990 closed 5 years ago

Yuxiang1990 commented 5 years ago

Hi, may I have your help? In run_single_segmentation.py, when I set K.set_learning_phase(0), I get a bad result. Thanks.

filipetrocadoferreira commented 5 years ago

That's a good question, and one I was never able to fully understand.

We know Dropout and BatchNormalization behave differently in the training and test phases, but somehow the test-phase results are not consistent with the training-phase ones.
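For context, the discrepancy can be reproduced outside Keras: in training mode BatchNormalization normalizes with the current batch's statistics, while in inference mode it substitutes running averages accumulated during training. If those running averages don't match the statistics the network actually saw (likely with tiny batches), the two phases compute different activations. A minimal NumPy sketch with a hand-rolled stand-in layer (gamma=1, beta=0; this is not the Keras implementation):

```python
import numpy as np

def batchnorm(x, running_mean, running_var, training, eps=1e-5):
    """Simplified BatchNorm: batch statistics in training, running stats at test."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)  # stats of the current batch
    else:
        mean, var = running_mean, running_var      # stored dataset-level averages
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(2, 4))  # tiny batch, as in 3D segmentation

# Hypothetical running stats that don't match this batch's distribution.
running_mean = np.zeros(4)
running_var = np.ones(4)

train_out = batchnorm(x, running_mean, running_var, training=True)
test_out = batchnorm(x, running_mean, running_var, training=False)

# Same input, clearly different activations in the two phases.
print(np.abs(train_out - test_out).max() > 0.1)  # prints True
```

Forcing `K.set_learning_phase(1)` at inference keeps the network on the batch-statistics path, which is why the results change.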

Yuxiang1990 commented 5 years ago

Thanks for your reply. Through experiments, I have ruled out Dropout as the cause. Is there something special about your training process? I am really confused and curious.

filipetrocadoferreira commented 5 years ago

BatchNormalization in this case might as well be called instance normalization, because the batch size is so small. I don't know why switching from instance-level to dataset-level normalization changes the results so much.
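The small-batch point can be checked numerically: with batch size 1, normalizing over the batch plus spatial axes uses statistics from that single sample only, which is exactly what instance normalization computes. A sketch with simplified stand-in layers (gamma=1, beta=0; not the Keras implementations):

```python
import numpy as np

def bn_train(x, eps=1e-5):
    """BatchNorm in training mode: stats over batch + spatial axes, per channel."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    """InstanceNorm: stats over spatial axes only, per sample and channel."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 8, 8, 3))  # batch size 1, (N, H, W, C)

# With a single sample, batch statistics collapse to per-sample statistics.
print(np.allclose(bn_train(x), instance_norm(x)))  # prints True
```

With batch size greater than 1 the two layers diverge, so the equivalence is specific to the tiny-batch regime discussed here.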

Yuxiang1990 commented 5 years ago

I got it, thanks.

JayJJChen commented 5 years ago

Guys, I tried replacing BN in the model with InstanceNorm from this implementation, and it turns out I get the same output even without setting the learning phase to 1. But this leads me to wonder what you get in the validation phase during training.
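The observation above is consistent with instance normalization keeping no running averages: it depends only on the current input, so the learning-phase flag has nothing to switch on. A small stand-in sketch (the `learning_phase` argument here is hypothetical, mimicking the Keras flag; gamma=1, beta=0):

```python
import numpy as np

def instance_norm(x, learning_phase, eps=1e-5):
    """InstanceNorm ignores the learning phase: with no stored running
    statistics, the same per-sample normalization runs in both modes."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(2)
x = rng.normal(size=(2, 16, 16, 1))

# Train-mode and test-mode outputs are identical, so validation during
# training sees the same normalization as inference does.
print(np.allclose(instance_norm(x, learning_phase=1),
                  instance_norm(x, learning_phase=0)))  # prints True
```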

filipetrocadoferreira commented 5 years ago

I'm not working on this anymore. I can try to run some experiments in my spare time, but I don't know when.