Nikronic closed this issue 5 years ago.
I am still not sure which values should be used. In the PyTorch community, it has been mentioned that using

    normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
                                     std=[0.5, 0.5, 0.5])

converts images from [0, 1] to [-1, 1], and this is our goal too.
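To see why this mapping holds, here is a minimal pure-Python sketch of the per-pixel arithmetic that `transforms.Normalize` applies, `(x - mean) / std`; the helper name `normalize_value` is made up for illustration, and no torch install is assumed:

```python
def normalize_value(x, mean=0.5, std=0.5):
    """Apply the same per-value arithmetic as transforms.Normalize."""
    return (x - mean) / std

# The endpoints of [0, 1] map to the endpoints of [-1, 1]:
print(normalize_value(0.0))  # -1.0
print(normalize_value(1.0))  # 1.0
print(normalize_value(0.5))  # 0.0
```

Since the mapping is linear, every value in between lands inside [-1, 1] as well.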
By the way, we use VGG as part of the loss function, and it seems to use different normalization values.
All PyTorch vision models use the following normalization: https://github.com/pytorch/examples/blob/master/imagenet/main.py#L197-L198 which is obtained from here: https://discuss.pytorch.org/t/how-to-preprocess-input-for-pre-trained-networks/683/2?u=nikronic
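For reference, the linked example uses the well-known ImageNet statistics (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]). A minimal pure-Python sketch of the per-channel arithmetic, with a hypothetical helper `normalize_pixel` and no torch install assumed:

```python
# ImageNet statistics used by pretrained torchvision models.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Normalize one RGB pixel (values in [0, 1]) channel-wise."""
    return [(v - m) / s for v, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)]

# A pixel equal to the dataset mean normalizes to zero in every channel:
print(normalize_pixel([0.485, 0.456, 0.406]))  # [0.0, 0.0, 0.0]
```

Note that unlike the 0.5/0.5 scheme, these statistics center the data around the ImageNet mean rather than mapping onto a fixed [-1, 1] range.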
Normalization in custom transforms should be applied as below:
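As a hedged sketch of what such a custom transform could look like, here is a small dependency-free class mimicking the per-channel behavior of `transforms.Normalize`; the class name `NormalizeChannels` and the list-of-channels representation are assumptions made for illustration, not the actual code from this issue:

```python
class NormalizeChannels:
    """Per-channel (x - mean) / std, mimicking transforms.Normalize."""

    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, img):
        # img: list of channels, each a flat list of floats in [0, 1]
        return [[(px - m) / s for px in ch]
                for ch, m, s in zip(img, self.mean, self.std)]

t = NormalizeChannels(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
out = t([[0.0, 1.0], [0.5, 0.5], [0.25, 0.75]])
print(out[0])  # [-1.0, 1.0]
```

In a real pipeline the same idea would be composed with the other transforms (e.g. inside `transforms.Compose`) so the normalization always runs after conversion to a [0, 1] tensor.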