Closed: okt-wang closed this issue 2 years ago
Would you please share the module definition of the object self.spp_scale4_0? It would also help to know its input shape.
It is torch's average pooling layer, and my input is dummy_input_0 = torch.ones((1, 3, 720, 1280), dtype=torch.float32)
self.spp_scale4_0 = torch.nn.AvgPool2d(kernel_size=1, stride=1, padding=0)
@Ouskit Actually, I need the shape of the tensor (add_18) that is fed into this AvgPool2d layer.
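For context on the error reported later in this thread: averaging a 1x1 window with unit stride returns each element unchanged, so such a layer is an identity mapping regardless of the input shape. A quick check in plain PyTorch:

```python
import torch

# A 1x1 average pooling window with stride 1 averages exactly one
# element, so the layer passes its input through unchanged -- which is
# why QNNPACK considers it meaningless.
pool = torch.nn.AvgPool2d(kernel_size=1, stride=1, padding=0)
x = torch.randn(1, 3, 720, 1280)
assert torch.equal(pool(x), x)
```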
BTW, here is the code for the checks that run when performing avg_pool2d in QNNPACK.
@Ouskit I get the message Error in QNNPACK: failed to create average pooling with 1 pooling element: 1x1 pooling is meaningless when invoking the following code. So it seems you can just comment out that layer in ddrnet_qat.py and pass config={'force_overwrite': False} to the quantizer.
import torch

from tinynn.converter import TFLiteConverter
from tinynn.graph.quantization.quantizer import QATQuantizer


class Model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.spp_scale4_0 = torch.nn.AvgPool2d(kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        return self.spp_scale4_0(x)


def main():
    model = Model()
    model.eval()

    dummy_input = torch.ones(1, 3, 720, 1280)

    quantizer = QATQuantizer(model, dummy_input, work_dir='out')
    qat_model = quantizer.quantize()

    qat_model(dummy_input)

    with torch.no_grad():
        qat_model.eval()
        qat_model.cpu()

        qat_model = torch.quantization.convert(qat_model)
        torch.backends.quantized.engine = quantizer.backend

        converter = TFLiteConverter(qat_model, dummy_input, tflite_path='out/qat_model.tflite')
        converter.convert()


if __name__ == '__main__':
    main()
@Ouskit With https://github.com/alibaba/TinyNeuralNetwork/commit/7d2a2098cf56cd06df8362b76f3e28cd622f8eaf, pooling nodes with kernel_size=1 will be rewritten to slices automatically. Please give it a try.
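For reference, the rewrite is valid because a pooling window covering a single element computes the average of that element alone, so the op only subsamples the input. The equivalence can be checked in plain PyTorch (this is not the converter code itself, just an illustration):

```python
import torch

x = torch.randn(1, 3, 720, 1280)

# AvgPool2d(kernel_size=1, stride=s) picks every s-th element, so it
# is equivalent to a strided slice along the spatial dimensions.
pool = torch.nn.AvgPool2d(kernel_size=1, stride=2, padding=0)
assert torch.equal(pool(x), x[:, :, ::2, ::2])

# With stride 1 the "slice" degenerates to a plain copy (identity).
identity = torch.nn.AvgPool2d(kernel_size=1, stride=1, padding=0)
assert torch.equal(identity(x), x)
```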
Hi developers, thank you for your great work. I want to use QATQuantizer to quantize my model, but in the converter an error appears: RuntimeError: [enforce fail at q_avgpool.cpp:369] createStatus == pytorch_qnnp_status_success. failed to create QNNPACK Average Pooling operator
I'm using PyTorch v1.10. Is this a QNNPACK issue? I tried using fbgemm and it works. Thank you!
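The backend switch can be reproduced directly in PyTorch. This is a minimal sketch, independent of TinyNeuralNetwork: the x86 fbgemm backend does not perform QNNPACK's 1x1-pooling check, which is consistent with the error disappearing after switching.

```python
import torch

# List the quantized engines compiled into this PyTorch build
# (typically 'fbgemm' on x86 and 'qnnpack' on mobile/ARM builds).
print(torch.backends.quantized.supported_engines)

# Select fbgemm globally; quantized ops executed after this point use
# fbgemm kernels instead of QNNPACK.
torch.backends.quantized.engine = 'fbgemm'
print(torch.backends.quantized.engine)
```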
Below is the whole call stack: