Closed mfigurnov closed 8 years ago
Thanks :) sorry about that
@mfigurnov interestingly, I had to change this via https://github.com/soumith/imagenet-multiGPU.torch/commit/8d59ca49d97595a66e977e5b1c936a84c595ac88
It turns out that without batch-normalization, the normalization is so sensitive, it's the difference between converging (loss goes down) and not converging....
Currently, images are mean/std normalized two times: in `loadImage` and in `trainHook`/`testHook`. Looks like a copy-paste bug :)
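To see why normalizing twice matters, here is a minimal NumPy sketch (not the repo's actual Lua/Torch code; the mean/std values are illustrative, not the ImageNet statistics): applying `(x - mean) / std` a second time shifts and rescales the data again, so the network receives inputs it was never meant to see.

```python
import numpy as np

# Illustrative per-channel statistics (hypothetical values, not the
# actual ImageNet mean/std used in soumith/imagenet-multiGPU.torch).
mean, std = 0.45, 0.22

rng = np.random.default_rng(0)
img = rng.random((4, 4))  # a toy "image"

once = (img - mean) / std    # normalized once (intended behaviour)
twice = (once - mean) / std  # normalized again (the reported bug)

# Algebraically: twice = (img - mean*(1 + std)) / std**2,
# which differs from the intended once-normalized input.
print(np.allclose(once, twice))  # -> False
```

Without batch normalization there is nothing downstream to re-center the activations, which is consistent with the observation above that input normalization alone can decide whether training converges.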