Sorry for bothering you. In example.lua I saw the following:
img:mul(255):clamp(0, 255):add(-117)
The mul(255) scales the values from the range 0...1 up to 0...255.
The add(-117) subtracts the mean of all the ImageNet images, I suppose.
I noticed that you do not divide by the standard deviation. Is this just a simplification for this example, or is it not needed in general? If we should do the normalization, what value do you suggest (the std-dev over all ImageNet images)?
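For reference, here is a NumPy sketch of what that Lua chain does (the image array and its values are hypothetical; the real code operates on a Torch tensor). Note the order: values are clamped to 0...255 *before* the mean of 117 is subtracted, so the result lies in -117...138.

```python
import numpy as np

# Hypothetical 2x2 grayscale "image" with values roughly in [0, 1];
# 1.2 simulates a slight overshoot that the clamp catches.
img = np.array([[0.0, 0.5],
                [1.0, 1.2]])

# Equivalent of img:mul(255):clamp(0, 255):add(-117)
out = np.clip(img * 255.0, 0, 255) - 117

print(out)
# [[-117.   10.5]
#  [ 138.  138. ]]
```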
@felixsmueller I do not divide by the std-deviation because that's what the Google network seemed to do in training. Usually, for all my trained networks, I normalize to 0-mean and stdv-1.
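A minimal NumPy sketch of that usual 0-mean, stdv-1 normalization (the random image here is a stand-in; in practice the mean and std would be computed once over the training set, often per channel, not per image):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((3, 32, 32)).astype(np.float64)  # fake 3-channel image

# Statistics that would normally come from the whole training set.
mean = img.mean()
std = img.std()

normalized = (img - mean) / std

print(normalized.mean(), normalized.std())  # ~0.0 and ~1.0
```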