xmba15 / onnx_runtime_cpp

Small C++ library to quickly deploy models using ONNX Runtime
MIT License

About loftr.onnx #41

Closed — yeluoo closed this issue 1 year ago

yeluoo commented 1 year ago

Hello, when you exported the .onnx model, did you rewrite any operators? I ran into a problem when using the loftr.onnx you provided to convert to a DLC file:

```
Traceback (most recent call last):
  File "/data01/software/snpe/lib/python/qti/aisw/converters/onnx/onnx_to_ir.py", line 330, in convert
    src_op.op_type)
  File "/data01/software/snpe/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 48, in apply_method_to_op
    translation = self.__get_translation(op_type)
  File "/data01/software/snpe/lib/python/qti/aisw/converters/common/converter_ir/translation.py", line 35, in __get_translation
    raise KeyError("No translation registered for op type {}.".format(op_type))
KeyError: 'No translation registered for op type onnx_einsum.'
2023-03-29 14:57:0
```

xmba15 commented 1 year ago

Sorry, this is out of the scope of this repository. This library does not support converting ONNX weights to other formats. My guess is that the Snapdragon (SNPE) framework does not support the torch einsum operator, and you will have to rewrite that operator yourself.
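A common workaround (not part of this repository; a hypothetical sketch) is to rewrite `torch.einsum` calls in the model source with equivalent `permute` + `matmul` ops before re-exporting to ONNX, so the graph contains plain `MatMul`/`Transpose` nodes instead of an `Einsum` node. The einsum equation below is an illustrative attention-style contraction, not necessarily the exact one LoFTR uses:

```python
import torch

def attention_einsum(q, k):
    # Batched dot product over the feature dimension d.
    # q: (n, l, h, d), k: (n, s, h, d) -> (n, l, s, h)
    return torch.einsum("nlhd,nshd->nlsh", q, k)

def attention_matmul(q, k):
    # Equivalent rewrite without einsum: exports to ONNX as
    # Transpose + MatMul, which converters like SNPE typically handle.
    # q -> (n, h, l, d); k -> (n, h, d, s)
    out = torch.matmul(q.permute(0, 2, 1, 3), k.permute(0, 2, 3, 1))
    # out: (n, h, l, s) -> (n, l, s, h)
    return out.permute(0, 2, 3, 1)

q = torch.randn(2, 5, 4, 8)
k = torch.randn(2, 7, 4, 8)
assert torch.allclose(attention_einsum(q, k), attention_matmul(q, k), atol=1e-5)
```

After patching the model this way, re-export with `torch.onnx.export` and re-run the SNPE conversion; any remaining einsum sites would need the same treatment.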