Yes, the mean/std normalization used in pytorch/examples/imagenet is what is expected. I'll document it now.
@soumith, are you referring to this documentation -> http://pytorch.org/docs/torchvision/models.html
I cannot find any reference to preprocessing the images.
I think the network object should have a `preprocessing` attribute, where those values are stored. Moreover, it should also have a `classes` attribute that lets you go from the output max index to the class name. As they are right now, they are hardly usable.
Finally, most of the time these nets are retrained, so it would be nice to have a method that allows you to replace the final classifier.
Here is a link to the required preprocessing -> https://github.com/pytorch/examples/blob/master/imagenet/main.py#L92-L93
documented in the README of vision now.
https://github.com/pytorch/vision/blob/master/README.rst#models
Reposting here for easy reference:
```python
import torchvision.transforms as transforms

# Normalization used for all torchvision ImageNet-pretrained models.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Training-time preprocessing: random crop and flip for augmentation.
preprocessing = transforms.Compose([
    transforms.RandomSizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

# Evaluation-time preprocessing: deterministic resize and center crop.
preprocessing = transforms.Compose([
    transforms.Scale(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])
```
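For completeness, here is a minimal inference sketch using the evaluation-time `preprocessing` pipeline above; `dog.jpg` is a placeholder filename and `resnet18` stands in for any of the pretrained models:

```python
import torch
from PIL import Image
from torchvision import models

model = models.resnet18(pretrained=True)
model.eval()  # disable dropout / batch-norm updates for inference

img = Image.open("dog.jpg").convert("RGB")   # placeholder input image
batch = preprocessing(img).unsqueeze(0)      # add batch dim: 1 x 3 x 224 x 224
with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1).item())           # index of the predicted ImageNet class
```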
Would it be better to keep the mean and std inside the torchvision models? It is annoying to keep these magic numbers in user code.
@youkaichao this is a good point, and the pre-trained models should have something like that.
But that's not all of it: there are other underlying assumptions that should be documented as well (e.g., that the image is RGB in the 0-1 range, even though that's the current default in PyTorch).
But I'm open to suggestions. I'm not sure where we should include such information: should it be in the `state_dict` of the serialized models (so that it can be read by some special mechanism)? Should it be hard-coded in the model implementation?
@fmassa how about registering mean and std as a buffer? As for the input range, I think you can print out a line that says "accepted images are in range [0, 1]" at initialization.
Registering them as a buffer is an option, but that also means that we would either need to change the way we do image normalization (which is currently handled in a transform) and do it in the model, or find a way of loading the state dict into a transform.
Both solutions are backwards-incompatible, so I'm not very happy with them...
@fmassa you could add a parameter at `__init__` like `pre_process=False`, with the default kept for backwards compatibility, and if `pre_process==True`, use the registered buffer. This way, users can opt into the pre-defined preprocessing just by setting a boolean flag, which seems much better than searching for the exact mean and std values everywhere.
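A minimal sketch of that proposal (the `NormalizedModel` wrapper and the `pre_process` argument are hypothetical, not an existing torchvision API):

```python
import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    """Hypothetical wrapper that stores the normalization stats as buffers,
    so they travel with the state_dict."""
    def __init__(self, backbone, pre_process=False):
        super().__init__()
        self.backbone = backbone
        self.pre_process = pre_process
        # Buffers are serialized with the state_dict but are not trainable.
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, x):
        # Expects RGB input in the [0, 1] range, NCHW layout.
        if self.pre_process:
            x = (x - self.mean) / self.std
        return self.backbone(x)
```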
Well, the good thing about torchvision models is that (almost) all of them use the same pre-processing values.
Also, it's a bit more involved than that: before, one could just load the model using `load_state_dict`, but if we add extra buffers, old users might need to load with `strict=False`, or else their loading code will crash.
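To illustrate the concern (the checkpoint filename here is hypothetical): a published state_dict that gains extra `mean`/`std` buffer keys would raise an "Unexpected key(s) in state_dict" error when loaded into an older model definition, unless strict checking is disabled:

```python
import torch
from torchvision import models

model = models.resnet18()                            # old definition, no extra buffers
state = torch.load("resnet18_with_buffers.pth")      # hypothetical new-style checkpoint
model.load_state_dict(state, strict=False)           # skips the unexpected keys
```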
Hi, I want to extract features from the pool5 and res5c layers of a pre-trained ResNet. I'm using frames (RGB values) extracted from the TGIF-QA dataset (GIFs). Is the following preprocessing correct?
```python
loader = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(IMAGE_SIZE),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```
@gursimar yes, it should be fine
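For reference, a sketch of what that extraction could look like, assuming `IMAGE_SIZE = 224`, the `loader` defined above, and a placeholder frame filename. In torchvision's ResNet, "res5c" corresponds to the output of `layer4` and "pool5" to the global average pool:

```python
import torch
from PIL import Image
from torchvision import models

resnet = models.resnet152(pretrained=True)
resnet.eval()

# Capture the output of the last residual stage ("res5c") with a forward hook.
feats = {}
resnet.layer4.register_forward_hook(lambda module, inp, out: feats.update(res5c=out))

frame = loader(Image.open("frame_0001.jpg").convert("RGB")).unsqueeze(0)  # placeholder frame
with torch.no_grad():
    resnet(frame)

res5c = feats["res5c"]               # shape: (1, 2048, 7, 7) for a 224x224 input
pool5 = res5c.mean(dim=(2, 3))       # global average pool -> (1, 2048)
```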
Hey @Atcold, the link no longer works. I still cannot find the documentation for the pre-processing transforms used for the various pre-trained models in torchvision. I think the transforms should be included with the model. Would I get better performance if I use the same transforms while fine-tuning, or does it not matter?
Do all pretrained models in torchvision use the same pre-processing transforms as described by jianchao-li?
The new link -> https://pytorch.org/vision/stable/models.html
> I think the transforms should be included with the model.
They are, in the new multi-weight API. It is currently in prototype, and you can read more here: https://github.com/pytorch/vision/blob/d8654bb0d84fd2ba8b42cd58d881523821a6214c/torchvision/prototype/models/resnet.py#L113
We plan to roll it out on main TorchVision within the next couple of weeks. We have a dedicated issue for feedback.
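As a sketch of how this looks once the API is stable (torchvision 0.13+), each weights enum bundles its own inference transforms and class names:

```python
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V1
model = resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()         # bundled resize/crop/normalize pipeline
categories = weights.meta["categories"]   # output index -> human-readable class name
```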
What kind of image preprocessing is expected for the pretrained models? I couldn't find this documented anywhere.
If I had to guess, I would assume that they expect RGB images with the mean/std normalization used in fb.resnet.torch and pytorch/examples/imagenet. Is this correct?