Open 1615070057 opened 1 year ago
+1, have you solved this error? @1615070057
This problem is caused by the fact that onnxruntime does not support float16 quantization of the Reshape operation. I reinstalled onnxconverter-common from this fork (https://github.com/toothache/onnxconverter-common), which keeps Reshape in fp32 while saving the other network layers in fp16. Although this halves the model size, inference is actually slower.
I used onnxconverter-common to quantize a model.onnx model to float16. When loading the fp16 model, the following error occurs. How should I solve it?

onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'Resize__139_input_cast_1' of node: name: Resize__139 OpType: Resize is not output of any previous nodes.