yangsenius / TransPose

PyTorch Implementation for "TransPose: Keypoint localization via Transformer", ICCV 2021.
https://github.com/yangsenius/TransPose/releases/download/paper/transpose.pdf
MIT License

Static quantization of TransPose model in PyTorch? NotImplementedError: Could not run 'aten::add.out' #36

Open mukeshnarendran7 opened 2 years ago

mukeshnarendran7 commented 2 years ago

I am trying to do static quantization of the TransPose-A4 model in PyTorch, but I run into the error below. It comes from the `aten::add.out` operation. How can I modify the pre-trained model so that it can be quantized? Thanks
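For context, this is the eager-mode post-training static quantization flow I am following (a minimal sketch on a toy stand-in model, not the actual TransPose network; module and variable names here are illustrative):

```python
import torch
import torch.nn as nn

# Toy float model standing in for TransPose (hypothetical stand-in).
# QuantStub/DeQuantStub mark where tensors enter/leave the quantized region.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 at input
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 at output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)
with torch.no_grad():
    prepared(torch.randn(1, 3, 16, 16))  # calibration pass with sample data
quantized = torch.quantization.convert(prepared)

out = quantized(torch.randn(1, 3, 16, 16))
print(out.shape)  # torch.Size([1, 8, 16, 16])
```

This sketch runs cleanly because the toy model contains only quantizable ops; the error below appears when the same flow hits the residual addition inside TransPose's bottleneck blocks.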

---> 8 model_static_quantized(x).shape

7 frames

/root/.cache/torch/hub/yangsenius_TransPose_main/lib/models/transpose_h.py in forward(self, x)
     99         residual = self.downsample(x)
    100
--> 101         out += residual
    102         out = self.relu(out)
    103

NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::add.out' is only available for these backends: [CPU, CUDA, Meta, MkldnnCPU, SparseCPU, SparseCUDA, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
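The error points at the in-place `out += residual` in `transpose_h.py`: plain tensor addition has no QuantizedCPU kernel in eager-mode quantization. The usual workaround is to route the skip connection through `nn.quantized.FloatFunctional`, whose `add` is observed during calibration and replaced with a quantized add at convert time. A minimal sketch of a residual block rewritten this way (the block below is illustrative, not the repo's actual `Bottleneck` code):

```python
import torch
import torch.nn as nn

# Residual block sketch: the in-place `out += residual` is replaced by
# nn.quantized.FloatFunctional.add, which quantization observers can track.
class QuantFriendlyBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()
        self.skip_add = nn.quantized.FloatFunctional()  # stands in for `+=`

    def forward(self, x):
        residual = x
        out = self.conv(x)
        out = self.skip_add.add(out, residual)  # instead of: out += residual
        return self.relu(out)

block = QuantFriendlyBlock(4).eval()
y = block(torch.randn(1, 4, 8, 8))
print(y.shape)  # torch.Size([1, 4, 8, 8])
```

In float mode `FloatFunctional.add` is just `torch.add`, so the change is behavior-preserving before quantization. Note that the Transformer layers in TransPose (multi-head attention, LayerNorm) may hit similar unsupported-op errors; a common approach is to wrap only the CNN backbone between `QuantStub`/`DeQuantStub` and leave the Transformer part in float.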