corehalt opened this issue 2 days ago
@jerryzh168 @HDCharles Is there another API people should use from torchao?
@janeyx99 @jerryzh168
The error mentioned above happens quite often with many different models; a few examples, all failing with "cannot mutate tensors with frozen storage":

- fastvit_mci0.apple_mclip
- darknet53.c2ns_in1k
- torchvision.resnet50
- torchvision.mobilenet_v2
Another common error I encounter when quantizing other models is the following (it seems that aten.sub ends up with a pair of inputs of mismatched dtypes, int32 and int64); a minimal standalone illustration is sketched after this list:

- resnet50_clip.openai
- beit_base_patch16_224.in22k_ft_in22k
- caformer_b36.sail_in1k
- deit3_base_patch16_384.fb_in1k
- vit_base_patch14_dinov2.lvd142m

All of them fail with:

Error: These operators are taking Tensor inputs with mismatched dtypes: defaultdict(<class 'dict'>, {<EdgeOpOverload: aten.sub.Tensor>: schema = aten::sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor: {'self': torch.int32, 'other': torch.int64, '__ret_0': torch.int32}})
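For reference, here is a tiny hypothetical module (not taken from any of the models above, only an assumption about the shape of the problem) that produces the same int32/int64 pair on an aten.sub.Tensor node after torch.export:

```python
import torch

class MixedSub(torch.nn.Module):
    # Hypothetical example: it only exists to create an aten.sub.Tensor
    # node whose inputs are int32 and int64, the dtype pair flagged by
    # the edge dialect verifier in the errors above.
    def forward(self, x):
        offset = torch.full_like(x, 1, dtype=torch.int64)
        return x - offset

example_inputs = (torch.zeros(4, dtype=torch.int32),)
ep = torch.export.export(MixedSub(), example_inputs)
# The graph contains aten.sub.Tensor with self=int32 and other=int64,
# which is the combination the dtype check rejects.
print(ep.graph_module.graph)
```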
🐛 Describe the bug
After quantizing a ResNet-18 model with PyTorch 2 Export post-training quantization, it is no longer possible to export the model.
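A minimal sketch of the flow that hits this, assuming the standard PT2E prepare_pt2e/convert_pt2e workflow with the XNNPACK quantizer (the exact capture and quantizer calls here are an assumption, not necessarily the ones from the original run):

```python
import torch
import torchvision
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

model = torchvision.models.resnet18().eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Capture the model for PT2E quantization.
captured = torch.export.export_for_training(model, example_inputs).module()

# Annotate, calibrate, then convert to a quantized graph.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(captured, quantizer)
prepared(*example_inputs)  # calibration with sample data
quantized = convert_pt2e(prepared)

# Exporting the converted model is the step that fails with the
# frozen-storage error described above.
exported = torch.export.export(quantized, example_inputs)
```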
I get the error:
Versions
@jerryzh168
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim