ClementPinard / FlowNetTorch

Torch implementation of Fischer et al. FlowNet training code
30 stars · 6 forks

About the effect of batch normalization #8

Closed yxqlwl closed 7 years ago

yxqlwl commented 7 years ago

Hi, it seems that FlowNetSBN performs worse than FlowNetS with batch_size = 8. Is it because the batch size is not large enough? Thanks!

ClementPinard commented 7 years ago

Hello, thanks for your observation. I didn't try a batch size of 8, but you may be right: batch normalization normalizes over the whole batch, so the more data you have, the closer the per-batch statistics will be to `running_mean` and `running_std`, which are the parameters used when inferring on data.
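As a minimal numpy sketch (not the repo's code) of why small batches hurt: the per-batch mean that BatchNorm normalizes with is a noisier estimate of the population mean (the analogue of `running_mean`) when the batch is small. The population parameters and batch sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical activations with true mean 2.0 and std 3.0.
population = rng.normal(loc=2.0, scale=3.0, size=(100_000, 1))

def batch_stat_error(batch_size, n_batches=500):
    """Average absolute deviation of the per-batch mean from the true mean."""
    errors = []
    for _ in range(n_batches):
        idx = rng.integers(0, len(population), size=batch_size)
        errors.append(abs(population[idx].mean() - 2.0))
    return float(np.mean(errors))

small = batch_stat_error(8)    # batch size from the issue
large = batch_stat_error(64)   # a larger batch for comparison

# Larger batches track the population mean more closely
# (error shrinks roughly like 1/sqrt(batch_size)).
print(small, large)
```

With batch size 8, train-time normalization statistics deviate noticeably from the running statistics used at test time, which is one plausible reason FlowNetSBN would underperform at that setting.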

Unfortunately I no longer have time to do a full training run with this code. But if you have nice graphs showing better progress, feel free to open a PR!

yxqlwl commented 7 years ago

Thanks. I will try to add some graphs. :)