pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

Preprocessing for pretrained models? #39

Closed · jcjohnson closed 7 years ago

jcjohnson commented 7 years ago

What kind of image preprocessing is expected for the pretrained models? I couldn't find this documented anywhere.

If I had to guess I would assume that they expect RGB images with the mean/std normalization used in fb.resnet.torch and pytorch/examples/imagenet. Is this correct?

soumith commented 7 years ago

yes, the mean/std normalization that is used in pytorch/examples/imagenet is what is expected. I'll document it now.
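
For reference, the pipeline in pytorch/examples/imagenet looks roughly like this (a minimal sketch of the standard ImageNet evaluation preprocessing, using the 256-resize / 224-crop setup):

from torchvision import transforms

# Standard ImageNet eval preprocessing: resize, center-crop to 224,
# convert to a [0, 1] float tensor, then per-channel normalization.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])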

Atcold commented 7 years ago

@soumith, are you referring to this documentation -> http://pytorch.org/docs/torchvision/models.html? I cannot find any reference there to preprocessing the images. I think the network object should have a preprocessing attribute where those values are stored. Moreover, it should also have a classes attribute that lets you map the output max index to the class name. As they are right now, the models are hardly usable. Finally, most of the time these nets are retrained, so it would be nice to have a method that allows you to replace the final classifier.

Here is a link to the required preprocessing -> https://github.com/pytorch/examples/blob/master/imagenet/main.py#L92-L93

soumith commented 7 years ago

documented in the README of vision now.

https://github.com/pytorch/vision/blob/master/README.rst#models

jianchao-li commented 6 years ago

Replying here for easy reference:

from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

youkaichao commented 6 years ago

Would it be better to keep the mean and std inside the torchvision models? It is annoying to keep these magic numbers scattered through the code.

fmassa commented 6 years ago

@youkaichao this is a good point, and the pre-trained models should have something like that. But that's not all of it: there are other underlying assumptions that should also be made known (the image is RGB in the 0-1 range, even though that's the current default in PyTorch). But I'm open to suggestions. I'm not sure where we should include such information: should it be in the state_dict of the serialized models (to be read by some dedicated mechanism)? Should it be hard-coded in the model implementation?

youkaichao commented 6 years ago

@fmassa how about registering the mean and std as buffers? As for the input range, you could print a line at initialization saying that accepted images are in the range [0, 1].
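
A minimal sketch of the buffer idea, with a hypothetical wrapper module (not part of torchvision):

import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    # Hypothetical wrapper: stores the normalization statistics alongside
    # the weights as buffers, so they travel with the state_dict and .to().
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, x):
        # Assumes RGB input already scaled to [0, 1].
        return self.model((x - self.mean) / self.std)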

fmassa commented 6 years ago

Registering them as a buffer is an option, but that also means that we would either need to change the way we do image normalization (which is currently handled in a transform) and do it in the model, or find a way of loading the state dict into a transform.

Both solutions are backwards-incompatible, so I'm not very happy with them...

youkaichao commented 6 years ago

@fmassa you could add a parameter to __init__ like pre_process=False, with the default value kept for backwards compatibility; if pre_process==True, use the registered buffer. This way, users get the pre-defined preprocessing just by setting a boolean flag, which seems much better than searching for the exact mean and std values everywhere.
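
Sketched out, the proposed flag might look like this (pre_process is hypothetical, not an actual torchvision argument, and the conv layer is a stand-in for a real architecture):

import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self, pre_process=False):
        super().__init__()
        self.pre_process = pre_process  # False preserves the old behaviour
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
        self.features = nn.Conv2d(3, 64, kernel_size=7)  # stand-in for the real layers

    def forward(self, x):
        if self.pre_process:
            x = (x - self.mean) / self.std
        return self.features(x)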

fmassa commented 6 years ago

well, the good thing about torchvision models is that (almost) all of them have the same pre-processing values.

Also, it's a bit more involved than that: before, one could just load the model using load_state_dict, but if we add extra buffers, old users might need to load with strict=False, or else their loading will crash.
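
For illustration, the failure mode is the missing-keys error below (a toy example, not torchvision code):

import torch
import torch.nn as nn

old = nn.Linear(4, 2)                        # "old" model, no extra buffers
new = nn.Linear(4, 2)
new.register_buffer("mean", torch.zeros(3))  # "new" model gained a buffer

# new.load_state_dict(old.state_dict())  # RuntimeError: Missing key(s): "mean"
new.load_state_dict(old.state_dict(), strict=False)  # tolerates the missing key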

gursimar commented 6 years ago

Hi, I want to extract features from the pool5 and res5c layers of a pre-trained ResNet. I'm using frames (RGB values) extracted from the TGIF-QA dataset (GIFs).

  1. Should I transform my images using the values specified above?
  2. I'm using the following preprocessing. Is this okay for my purpose?
from torchvision import transforms

loader = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(IMAGE_SIZE),  # IMAGE_SIZE would be 224 for the standard ImageNet setup
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

fmassa commented 6 years ago

@gursimar yes, it should be fine
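
One way to grab those activations, assuming torchvision's ResNet naming (the layer4 output corresponds to res5c and the avgpool output to pool5) and using resnet50 as a stand-in, is a forward hook:

import torch
from torchvision import models

model = models.resnet50(pretrained=True).eval()

features = {}
def save_to(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

model.layer4.register_forward_hook(save_to("res5c"))   # last conv block
model.avgpool.register_forward_hook(save_to("pool5"))  # global average pool

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # a batch preprocessed as above goes here

print(features["res5c"].shape, features["pool5"].shape)  # (1, 2048, 7, 7), (1, 2048, 1, 1)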

yashrathi-git commented 2 years ago

Hey @Atcold, the link no longer works. I still cannot find the documentation for the pre-processing transforms used by the various pre-trained models in torchvision. I think the transforms should be included with the model. Would I get better performance if I use the same transforms while fine-tuning, or does it not matter?

Do all pretrained models in torchvision use the same pre-processing transforms as described by jianchao-li?

Atcold commented 2 years ago

The new link -> https://pytorch.org/vision/stable/models.html

datumbox commented 2 years ago

I think the transforms should be included with the model.

They are, in the new Multi-weights API. It is currently in prototype, and you can read more here: https://github.com/pytorch/vision/blob/d8654bb0d84fd2ba8b42cd58d881523821a6214c/torchvision/prototype/models/resnet.py#L113

We plan to roll it out to main TorchVision within the next couple of weeks. We have a dedicated issue for feedback.
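
For reference, in the multi-weight API as it later shipped, the transforms ride along with the weights enum, roughly like this:

from torchvision.models import resnet50, ResNet50_Weights

# Each weights enum bundles its own inference transforms and metadata.
weights = ResNet50_Weights.IMAGENET1K_V1
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()     # matching resize/crop/normalize
classes = weights.meta["categories"]  # maps the output max index to a class name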