Tencent / TNN

TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, while drawing on the extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome to collaborate with us and make TNN a better framework.
Other

TNN convert & Android inference Error: can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource. Segmentation fault. #1949

Open Gabriel819 opened 1 year ago

Gabriel819 commented 1 year ago

1. Environment

I'm trying to convert a model I built myself to a TNN file.

  1. I converted my model from ONNX to TNN using convert.py in TNN/tools/convert2tnn, and the result is .opt.tnnproto, not .tnnproto. What does the opt suffix mean here? I got an 'onnx to tnn convert success' message, but is something wrong?

  2. Then, after putting this TNN model in benchmark/benchmark-model, I ran benchmark/benchmark_android/benchmark_models.sh and got an error.

```
2023-08-02 20:27:30 968: E source/tnn/optimizer/graph_matcher/ir.cc:230 Found unknown blob [backbone.blocks.0.norm1.weight] at Node [/backbone/blocks.0/norm1/Add_1]
E/tnn: virtual tnn::Status tnn::optimizer::NetOptimizerConvertMatMulToConv::Optimize(tnn::NetStructure *, tnn::NetResource *) [File source/tnn/optimizer/net_optimizer_convert_matmul_to_conv.cc][Line 77] code: 0x1000 msg: source/tnn/optimizer/graph_matcher/ir.cc:230 Found unknown blob [backbone.blocks.0.norm1.weight] at Node [/backbone/blocks.0/norm1/Add_1]
E/tnn: virtual tnn::Status tnn::BinaryLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 399] [WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!
E/tnn: virtual tnn::Status tnn::BinaryLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 399] [WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!
E/tnn: virtual tnn::Status tnn::MatMulLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 544] [WARNNING] can't infer resource shape from MatMul param in benchmark mode, random generator may not be exactly same with the real resource!
E/tnn: virtual tnn::Status tnn::BinaryLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 399] [WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!
Segmentation fault
E/tnn: tnn::Status tnn::OpenCLRuntime::Init() [File source/tnn/device/opencl/opencl_runtime.cc][Line 205] load program cache skipped, ret: 40966, msg: code: 0xA006 msg: open program cache file failed, input path: /data/local/tmp//d1_tnn_ocl_fd8c6f613ff9c0d503dbc462bf21353f_66e6f26f5f12a3349f451b682262ebbb_arm
E/tnn: virtual tnn::Status tnn::BinaryLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 399] [WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!
E/tnn: virtual tnn::Status tnn::BinaryLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 399] [WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!
E/tnn: virtual tnn::Status tnn::MatMulLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 544] [WARNNING] can't infer resource shape from MatMul param in benchmark mode, random generator may not be exactly same with the real resource!
E/tnn: virtual tnn::Status tnn::BinaryLayerResourceGenerator::GenLayerResource(tnn::LayerParam *, tnn::LayerResource **, std::vector<Blob *> &) [File source/tnn/interpreter/layer_resource_generator.cc][Line 399] [WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!
Segmentation fault
```

The log keeps repeating '[WARNNING] can't infer resource shape from binary param in benchmark mode, random generator may not be exactly same with the real resource!', and the error report gives no clue about which part of the model is wrong. What should be done to solve this problem?
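For what it's worth, the fatal message here is the first one: `Found unknown blob [...] at Node [...]` from `ir.cc` means a node consumes a blob name that neither a model input nor any earlier node's output provides. A stdlib-only sketch of that kind of graph check, using a simplified `(name, consumed, produced)` layer representation (the function and data layout are illustrative, not TNN's real IR):

```python
def find_unknown_blobs(inputs, layers):
    """Return (node, blob) pairs where a node consumes a blob that was
    never produced, mirroring (in simplified form) the check behind TNN's
    "Found unknown blob [...] at Node [...]" diagnostic.

    inputs : iterable of model input blob names.
    layers : list of (node_name, consumed_blobs, produced_blobs) tuples
             in topological order.
    """
    known = set(inputs)
    missing = []
    for node_name, consumed, produced in layers:
        for blob in consumed:
            if blob not in known:
                missing.append((node_name, blob))
        known.update(produced)
    return missing

# The weight blob referenced by the Add node is absent from the graph,
# which is exactly the situation reported in the log above.
layers = [
    ("/backbone/blocks.0/norm1/Add_1",
     ["x", "backbone.blocks.0.norm1.weight"],
     ["norm1_out"]),
]
print(find_unknown_blobs(["x"], layers))
```

If this check reports a weight-like blob name (as here), the weights never made it into the converted model, which points at the conversion step rather than the benchmark tool.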
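On the first question: the `.opt` in `.opt.tnnproto` indicates the converter ran its graph-optimization pass (operator fusion and similar rewrites) on the network before writing it out, so it is expected output rather than an error. A tiny hypothetical helper (the function name is made up for illustration) showing the naming convention:

```python
from pathlib import Path

def optimized_tnn_name(onnx_path: str) -> str:
    # Hypothetical helper mirroring convert2tnn's output naming: when the
    # optimization pass runs, the tool writes <name>.opt.tnnproto and
    # <name>.opt.tnnmodel instead of <name>.tnnproto / <name>.tnnmodel.
    return Path(onnx_path).stem + ".opt.tnnproto"

print(optimized_tnn_name("mymodel.onnx"))
```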

wb014 commented 9 months ago

Same error. Have you solved it?

zhuzhu18 commented 4 months ago

I ran into the same problem. How can it be solved?

zhuzhu18 commented 4 months ago

> I ran into the same problem. How can it be solved?

I found that when the PyTorch model was converted to ONNX, some Constant operators were folded, so those constant operators were lost when converting ONNX to the TNN model. With those operators missing, the model cannot run inference.