Tencent / TNN

TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is distinguished by several outstanding features, including its cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, and also draws on the good extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome; work collaboratively with us to make TNN a better framework.
Other

Converting TensorFlow to ONNX model succeeded! Converting ONNX to TNN model failed! #173

Open keson1984 opened 4 years ago

keson1984 commented 4 years ago

2020-07-10 03:54:45.644164832 [E:onnxruntime:, sequential_executor.cc:281 Execute] Non-zero status code returned while running ReduceSum node. Name:'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Sum' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/reduction/reduction_ops.cc:276 bool onnxruntime::PrepareForReduce(const onnxruntime::Tensor, onnxruntime::FastAllocVector&, int64_t&, int64_t&, const std::vector&, bool, std::vector&, bool, const onnxruntime::TensorShape) [with T = int; onnxruntime::FastAllocVector = std::vector<int, onnxruntime::OrtStlAllocator >; int64_t = long int] in_dim != 0 was false. Can't reduce on dim with value of 0 if 'keepdims' is false.Invalid output shape would be produced. input_shape:{0}

2020-07-10 03:54:45.644291607 [E:onnxruntime:, sequential_executor.cc:281 Execute] Non-zero status code returned while running Loop node. Name:'generic_loop_Loop__57' Status Message: Non-zero status code returned while running ReduceSum node. Name:'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Sum' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/reduction/reduction_ops.cc:276 bool onnxruntime::PrepareForReduce(const onnxruntime::Tensor, onnxruntime::FastAllocVector&, int64_t&, int64_t&, const std::vector&, bool, std::vector&, bool, const onnxruntime::TensorShape) [with T = int; onnxruntime::FastAllocVector = std::vector<int, onnxruntime::OrtStlAllocator >; int64_t = long int] in_dim != 0 was false. Can't reduce on dim with value of 0 if 'keepdims' is false. Invalid output shape would be produced. input_shape:{0}
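The root cause in the log above is a ReduceSum over a zero-sized dimension (`input_shape:{0}`) with `keepdims` false. A minimal Python sketch of the check onnxruntime is applying (an illustrative re-implementation, not the real `PrepareForReduce` C++ code):

```python
def prepare_for_reduce(input_shape, axes, keepdims):
    """Mimic the onnxruntime validation that produced the error above
    (illustrative only). Reducing over a zero-sized dimension with
    keepdims=False would yield an invalid output shape, so it is rejected."""
    axes = axes if axes else list(range(len(input_shape)))
    for ax in axes:
        if input_shape[ax] == 0 and not keepdims:
            raise ValueError(
                "Can't reduce on dim with value of 0 if 'keepdims' is false. "
                f"input_shape:{{{','.join(map(str, input_shape))}}}"
            )
    if keepdims:
        return [1 if i in axes else d for i, d in enumerate(input_shape)]
    return [d for i, d in enumerate(input_shape) if i not in axes]

print(prepare_for_reduce([0], axes=None, keepdims=True))    # [1]
print(prepare_for_reduce([2, 0, 3], axes=[0], keepdims=False))  # [0, 3]
# prepare_for_reduce([0], axes=None, keepdims=False) raises ValueError,
# reproducing the message in the log.
```

The zero-sized dimension here typically comes from a dynamic shape in the exported graph (the NonMaxSuppression loop in the SSD postprocessor) collapsing to 0 at inference time.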

1627180283 commented 4 years ago

Can you show me the model?

darrenyao87 commented 4 years ago

@keson1984 The log shows that onnxruntime failed because of an invalid input_shape. You'd better check that your input shape is fixed.
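To follow that advice, you can inspect the ONNX model's declared input shapes for dynamic or zero dimensions before attempting the TNN conversion. A sketch (the helper function is illustrative, not part of TNN's tooling; `dims` uses the convention that a dynamic dimension appears as `None` or a symbolic name, matching how ONNX `value_info` encodes them):

```python
def input_shape_is_fixed(dims):
    """True if every dimension is a concrete positive integer.

    Dynamic dimensions show up as None / a symbolic string, and
    zero-sized dimensions are also rejected, since both break
    converters that need a static shape.
    """
    return all(isinstance(d, int) and d > 0 for d in dims)

# With the onnx package installed, input dims can be extracted like:
#   import onnx
#   model = onnx.load("model.onnx")
#   for inp in model.graph.input:
#       dims = [d.dim_value if d.HasField("dim_value") else d.dim_param or None
#               for d in inp.type.tensor_type.shape.dim]
#       print(inp.name, dims, input_shape_is_fixed(dims))

print(input_shape_is_fixed([1, 300, 300, 3]))     # True  -- fixed shape
print(input_shape_is_fixed([None, 300, 300, 3]))  # False -- dynamic batch
```

If the batch dimension is dynamic, re-exporting with a fixed shape often helps; tf2onnx, for example, lets you override input shapes at conversion time with `--inputs input_name:0[1,300,300,3]`.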