This check requires the weight scale's length to equal out_c * in_c / group. However, when I use a quantized PyTorch pretrained model such as torchvision.models.quantization.resnext101_32x8d, its weight scale's length is out_c. After I changed the requirement to out_c, it works fine.
For example:
When the input tensor has shape [1, 256, 56, 56], the weight tensor has shape [256, 8, 3, 3], the weight scale has shape [256], and group is 32, this check fails: https://github.com/apache/tvm/blob/main/src/relay/qnn/op/convolution.cc#L81
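To make the mismatch concrete, here is a minimal sketch (plain Python, no TVM or PyTorch required) of the arithmetic behind the failing check, using the shapes above; the variable names are illustrative, not taken from the TVM source:

```python
# Shapes from the example above (resnext101_32x8d-style grouped conv).
out_c = 256              # output channels
in_c_per_group = 8       # weight tensor's second dim is in_c / groups
groups = 32

# Full input channel count implied by the weight layout [out_c, in_c/groups, kH, kW].
in_c = in_c_per_group * groups   # 256

# What the TVM check at convolution.cc#L81 expects the scale length to be:
expected_by_tvm = out_c * in_c // groups   # 256 * 256 // 32 = 2048

# What PyTorch's per-channel quantization actually provides:
# one scale per output channel.
scale_len_from_pytorch = out_c             # 256

# The two disagree for grouped convolutions, so the check rejects the model.
print(expected_by_tvm, scale_len_from_pytorch)
```

Relaxing the check to compare against out_c, as described above, makes the two quantities agree for this model.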
Thank you.