Pre-trained models, data, code & materials from the paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" (ICLR 2019 Oral)
Hi Robert Geirhos,

I'm trying to compute the shape bias of my model using the cue-conflict images in the stimuli/style-transfer-preprocessed-512 folder, and I would like to know how you preprocessed these images.

Did you use the standard ImageNet normalization mentioned in the README?

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

Or should I instead recalculate and use the mean and std of the cue-conflict images themselves? Below are the values I computed myself:

mean = [0.5374, 0.4923, 0.4556]
std = [0.2260, 0.2207, 0.2231]

Additionally, I don't think Resize(256) and CenterCrop(224) are needed for these images, since they are already 224 x 224 (height, width). So I changed the test transformation from:

to:
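To make the intended change concrete, here is a minimal NumPy sketch of the preprocessing I have in mind (the actual code uses torchvision.transforms; `preprocess` and `center_crop` are illustrative names of my own), assuming images are loaded as (H, W, C) float arrays in [0, 1]:

```python
import numpy as np

# Standard ImageNet statistics, as given in the README
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def center_crop(img, size=224):
    """Return the central size x size patch of an (H, W, C) image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def preprocess(img, crop=False):
    """Normalize an (H, W, C) float image in [0, 1] per channel.

    crop=True mimics the original CenterCrop(224) step, which seems
    redundant here because the cue-conflict stimuli are already 224 x 224.
    """
    if crop:
        img = center_crop(img)
    return (img - IMAGENET_MEAN) / IMAGENET_STD
```

With the stimuli already at 224 x 224, the simplified pipeline would just be `preprocess(img)` with no resizing or cropping.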
I would like to know your settings.
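For reference, the per-channel statistics quoted above can be computed roughly like this (a sketch of my own; it assumes the cue-conflict images have already been loaded, scaled to [0, 1], and stacked into one array):

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a batch of images.

    `images` is an (N, H, W, C) float array in [0, 1]; in practice one
    would load each cue-conflict image, scale it to [0, 1], and stack
    the results before calling this.
    """
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std
```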
Best regards,
Sou Yoshihara, Master's student at Kyoto University