Open ZHIZIHUABU opened 1 year ago
Hi! We've received your issue; a technician will be assigned to answer it as soon as possible, so please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment & version info, and the error message. You can also look for an answer in the official documentation, the FAQ, and historical issues. Have a nice day!
[I 1/ 1 10:37:22. 29 ...e-Lite/lite/model_parser/model_parser.cc:537 SaveModelNaive] 2. Model is optimized and saved into ppliteseg_qat_self.nb successfully
dir: . width: 256 height: 256 mean: 0.500000,0.500000,0.500000 std: 0.500000,0.500000,0.500000 draw_weight: 0.800000
[I 1/ 1 10:37:22.237 ...r/src/driver/verisilicon_timvx/engine.cc:44 Context] properties:
[I 1/ 1 10:37:22.237 ...r/src/driver/verisilicon_timvx/engine.cc:56 Context] bn_fusion_max_allowed_quant_scale_deviation: -1
[W 1/ 1 10:37:22.238 ...ter/nnadapter/src/runtime/compilation.cc:334 Finish] Warning: Failed to create a program, No model and cache is provided.
[W 1/ 1 10:37:22.238 ...le-Lite/lite/kernels/nnadapter/engine.cc:149 LoadFromCache] Warning: Build model failed(3) !
[W 1/ 1 10:37:22.346 ...nnadapter/nnadapter/src/runtime/model.cc:86 GetSupportedOperations] Warning: Failed to get the supported operations for device 'verisilicon_timvx', because the HAL interface 'validate_program' is not implemented!
[W 1/ 1 10:37:22.347 ...kernels/nnadapter/converter/converter.cc:171 Apply] Warning: Failed to get the supported operations for the selected devices, one or more of the selected devices are not supported!
[I 1/ 1 10:37:22.347 ...r/src/driver/verisilicon_timvx/driver.cc:70 CreateProgram] Create program for verisilicon_timvx.
E [/media/rk_install/Paddle-Lite/build.lite.linux.armv7hf.gcc/third_party/tim-vx/src/tim/vx/internal/src/ops/vsi_nn_op_eltwise.c:op_check_add:576]Inputs/Outputs data type not support: FLOAT32, ASYM UINT8, FLOAT32
E [/media/rk_install/Paddle-Lite/build.lite.linux.armv7hf.gcc/third_party/tim-vx/src/tim/vx/internal/src/vsi_nn_graph.c:setup_node:483]Check node[100] SUBTRACT fail
[F 1/ 1 10:37:22.886 ...r/src/driver/verisilicon_timvx/engine.cc:188 Build] Failed to compile tim-vx graph!
[F 1/ 1 10:37:22.887 ...ter/nnadapter/src/runtime/compilation.cc:98 ~Program] Check failed: device_context: No device found.
terminate called after throwing an instance of 'nnadapter::logging::Exception'
what(): NNAdapter C++ Exception: [F 1/ 1 10:37:22.887 ...ter/nnadapter/src/runtime/compilation.cc:98 ~Program] Check failed: device_context: No device found.
model.zip
Both the mobileseg model and ppliteseg_qat_self were quantization-trained by ourselves with PaddleSlim. The mobileseg model takes 250 ms per inference on the CPU but 800 ms on the NPU. ppliteseg_qat_self runs fine on the CPU, but on the NPU it fails with the error above. pp_liteseg is the demo downloaded from the PaddleSlim repository; it runs correctly with an inference time of 70 ms. All of the models above take 256x256 input images. Paddle-Lite is the develop branch, PaddleSlim is version 2.4.1, and paddle-gpu is version 2.4.2.
@ZHIZIHUABU From the error log, the crash happens in the Tim-VX library's elementwise op check, because Tim-VX elementwise operators do not support mixed FP32 and U8 inputs. We cannot tell from the model itself why this happens, so please run export GLOG_v=5 first, then rerun, save the complete log file, and send it to us. Thanks!
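A minimal sketch of the maintainer's request above: enable verbose glog output, then rerun and capture the full log. The demo binary name and log filename here are placeholders, not taken from the thread.

```shell
# Turn on maximum-verbosity glog output for Paddle-Lite / NNAdapter,
# as requested by the maintainer.
export GLOG_v=5

# Rerun the demo, capturing stdout AND stderr into one file
# (binary and model names below are placeholders for your own):
# ./seg_demo ppliteseg_qat_self.nb 2>&1 | tee full_log.txt

echo "GLOG_v is now $GLOG_v"
```

The 2>&1 redirection matters because glog writes to stderr; without it the saved log file would miss the diagnostic output.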
Could you also help analyze why the mobileseg model is slower on the NPU than on the CPU? The NPU takes 800 ms, while the CPU only needs 200 ms, and I get the same result when deploying with FastDeploy. Also, which directory is the complete log file saved in, or do I need to save it manually myself?
log.txt
I urgently need to solve this problem and would appreciate your help. mobileseg is the model we need to deploy, and its inference is very slow; please help us analyze it.
@yingshengBD Please take a look at this issue.
Has this problem been solved yet? Baidu? PaddlePaddle? Please take a look.
To get your issue resolved quickly, before opening an issue please first search for similar problems in the historical issues, the FAQ, and the official documentation.
When opening an issue, to speed up resolution, please provide the following information about your usage: