To test TopFormer's inference speed on a mobile device, I followed the tnn_runtime.md guide.
But when I run
python3 tools/convert2onnx.py <config-file> --input-img <img-dir> --shape 512 512 --checkpoint <model-ckpt>
to convert the model to ONNX, I get this error:
torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of operator adaptive_avg_pool2d, output size that are not factor of input size. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues [Caused by the value '497 defined in (%497 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 4 4 [ CPULongType{2} ]]()
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Constant'.]
Inputs:
Empty
Outputs:
#0: 497 defined in (%497 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 4 4 [ CPULongType{2} ]]()
) (type 'Tensor')
There seems to be a problem with the adaptive_avg_pool2d operation used in TopFormer: ONNX export does not support it when the output size is not a factor of the input size, so the model cannot be converted. How did you solve this problem?
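In case it helps anyone hitting the same error: since the export uses a fixed input shape (512 512), one common workaround is to replace the adaptive pool with a plain AvgPool2d whose kernel and stride are computed from the known feature-map size. ONNX AveragePool only needs static attributes, so it exports cleanly. This is a hedged sketch, not necessarily how the TopFormer authors fixed it; FixedPool and the example sizes below are hypothetical, and when the output size does not evenly divide the input the averaging windows differ slightly from true adaptive pooling.

```python
import torch
import torch.nn as nn

class FixedPool(nn.Module):
    """Approximates adaptive_avg_pool2d for a known (static) input size.

    Hypothetical stand-in for TopFormer's pooling stage: with the export
    shape fixed, the adaptive pool can be swapped for nn.AvgPool2d, whose
    static kernel/stride attributes map directly onto ONNX AveragePool.
    """

    def __init__(self, in_hw, out_hw):
        super().__init__()
        # stride = floor(in / out); kernel chosen so the last window
        # still ends inside the input: in - (out - 1) * stride
        stride = (in_hw[0] // out_hw[0], in_hw[1] // out_hw[1])
        kernel = (in_hw[0] - (out_hw[0] - 1) * stride[0],
                  in_hw[1] - (out_hw[1] - 1) * stride[1])
        self.pool = nn.AvgPool2d(kernel_size=kernel, stride=stride)

    def forward(self, x):
        return self.pool(x)

# Non-divisible case (17 -> 4), which is what breaks the adaptive export:
x = torch.randn(1, 8, 17, 17)
m = FixedPool((17, 17), (4, 4))
print(m(x).shape)  # torch.Size([1, 8, 4, 4])
```

When the output size does divide the input size, this is numerically identical to adaptive average pooling; either way the resulting module exports with torch.onnx.export without the Unsupported error above.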