pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

quantized fuse_model should have an inplace argument #1569

Open z-a-f opened 5 years ago

z-a-f commented 5 years ago

Currently, if the user wants a fused model while preserving the original, they have to do the following:

import copy
import torchvision.models.quantization as models

model = models.resnet18(pretrained=True, progress=True, quantize=False)
model_fused = copy.deepcopy(model)
model_fused.fuse_model()

Given that the quantization API already provides an inplace argument for torch.quantization.fuse_modules, the model-level fuse_model methods should expose an inplace option as well.
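
For illustration, here is a minimal sketch of what the proposed argument could look like on a toy conv-bn-relu module. QuantizableNet, the layer names, and the default value of inplace are assumptions for this example, not the actual torchvision implementation.

import copy

import torch
from torch.quantization import fuse_modules


class QuantizableNet(torch.nn.Module):
    # Hypothetical toy model, used only to illustrate the proposed signature;
    # not the real torchvision QuantizableResNet.
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, 3)
        self.bn = torch.nn.BatchNorm2d(16)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

    def fuse_model(self, inplace=True):
        # Mirror torch.quantization.fuse_modules, which already takes an
        # inplace flag: fuse a deep copy when inplace=False and return it,
        # so the caller no longer has to deepcopy by hand.
        model = self if inplace else copy.deepcopy(self)
        fuse_modules(model, [['conv', 'bn', 'relu']], inplace=True)
        return model


# Usage: the manual deepcopy from the snippet above becomes unnecessary.
model = QuantizableNet().eval()                # conv+bn fusion expects eval mode
model_fused = model.fuse_model(inplace=False)  # `model` keeps its separate conv/bn

The default inplace=True keeps the current behavior of fuse_model (mutating the model) unchanged for existing callers.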

fmassa commented 5 years ago

This sounds OK to me, as it will only be exposed in the fuse_model method of the quantized models.