PaddlePaddle / FastDeploy

⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge. Covers 20+ mainstream scenarios and 150+ SOTA models across image, video, text, and audio, with end-to-end optimization and multi-platform, multi-framework support.
https://www.paddlepaddle.org.cn/fastdeploy
Apache License 2.0

Error when deploying the FastDeploy Python example on Jetson Nano!!!! #2219

Open jiangming7301 opened 11 months ago

jiangming7301 commented 11 months ago

Friendly reminder: informal community statistics show that questions following the issue template get answered and resolved faster.


Environment

Problem logs and the steps that triggered the problem

jiangming7301 commented 11 months ago

Screenshot 2023-10-05 12-37-48: this is a screenshot of my Jetson Nano configuration

jiangjiajun commented 11 months ago
jiangming7301 commented 11 months ago

For the first problem: do I need to add this RuntimeOption.set_trt_cache_file call? For the second problem: when compiling, I added the following

ENABLE_PADDLE_BACKEND & PADDLEINFERENCE_DIRECTORY are optional

export ENABLE_PADDLE_BACKEND=ON
export PADDLEINFERENCE_DIRECTORY=/Download/paddle_inference_jetson
It still errors out; the error message is as follows:

dlinano@jetson-nano:~/FastDeploy/python$ export BUILD_ON_JETSON=ON
dlinano@jetson-nano:~/FastDeploy/python$ export ENABLE_VISION=ON
dlinano@jetson-nano:~/FastDeploy/python$ export ENABLE_PADDLE_BACKEND=ON
dlinano@jetson-nano:~/FastDeploy/python$ export PADDLEINFERENCE_DIRECTORY=/paddle_inference_install_dir
dlinano@jetson-nano:~/FastDeploy/python$ python setup.py build
running build
running build_py
running create_version
running cmake_build
Decompress file /home/dlinano/FastDeploy/python/.setuptools-cmake-build/patchelf-0.15.0-aarch64.tar.gz ...
-- Use the default onnxruntime lib. The ONNXRuntime path: /home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/install/onnxruntime
Cannot compile with onnxruntime-gpu while in linux-aarch64 platform, fallback to onnxruntime-cpu
CMake Error at cmake/paddle_inference.cmake:57 (find_package):
  By not providing "FindPython.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Python", but
  CMake did not find one.

Could not find a package configuration file provided by "Python" with any of the following names:

PythonConfig.cmake
python-config.cmake

Add the installation prefix of "Python" to CMAKE_PREFIX_PATH or set "Python_DIR" to a directory containing one of the above files. If "Python" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): CMakeLists.txt:245 (include)

-- Configuring incomplete, errors occurred! See also "/home/dlinano/FastDeploy/python/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log". Traceback (most recent call last): File "setup.py", line 465, in license='Apache 2.0') File "/usr/lib/python3/dist-packages/setuptools/init.py", line 129, in setup return distutils.core.setup(**attrs) File "/usr/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "setup.py", line 308, in run self.run_command('cmake_build') File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "setup.py", line 294, in run subprocess.check_call(cmake_args) File "/usr/lib/python3.6/subprocess.py", line 311, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_EXECUTABLE=/usr/bin/python', '-DBUILD_FASTDEPLOY_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=paddle2onnx', '-DPY_EXT_SUFFIX=.cpython-36m-aarch64-linux-gnu.so', '-DCMAKE_BUILD_TYPE=Release', '-DLIBRARY_NAME=fastdeploy', '-DPY_LIBRARY_NAME=fastdeploy_main', '-DENABLE_TVM_BACKEND=OFF', '-DENABLE_RKNPU2_BACKEND=OFF', '-DENABLE_SOPHGO_BACKEND=OFF', '-DENABLE_ORT_BACKEND=OFF', '-DENABLE_OPENVINO_BACKEND=OFF', '-DENABLE_PADDLE_BACKEND=ON', '-DENABLE_POROS_BACKEND=OFF', '-DENABLE_TRT_BACKEND=OFF', '-DENABLE_LITE_BACKEND=OFF', 
'-DENABLE_VISION=ON', '-DENABLE_ENCRYPTION=OFF', '-DENABLE_FLYCV=OFF', '-DENABLE_CVCUDA=OFF', '-DENABLE_TEXT=OFF', '-DENABLE_BENCHMARK=OFF', '-DWITH_GPU=OFF', '-DWITH_IPU=OFF', '-DWITH_OPENCL=OFF', '-DWITH_TIMVX=OFF', '-DWITH_DIRECTML=OFF', '-DWITH_ASCEND=OFF', '-DWITH_KUNLUNXIN=OFF', '-DRKNN2_TARGET_SOC=', '-DTRT_DIRECTORY=UNDEFINED', '-DCUDA_DIRECTORY=/usr/local/cuda', '-DOPENCV_DIRECTORY=', '-DORT_DIRECTORY=', '-DPADDLEINFERENCE_DIRECTORY=/paddle_inference_install_dir', '-DPADDLEINFERENCE_VERSION=', '-DPADDLEINFERENCE_URL=', '-DPADDLEINFERENCE_API_COMPAT_2_4_x=OFF', '-DPADDLEINFERENCE_API_COMPAT_2_5_x=OFF', '-DPADDLEINFERENCE_API_COMPAT_DEV=OFF', '-DPADDLEINFERENCE_API_CUSTOM_OP=OFF', '-DPADDLE2ONNX_URL=', '-DPADDLELITE_URL=', '-DBUILD_ON_JETSON=ON', '-DBUILD_PADDLE2ONNX=OFF', '/home/dlinano/FastDeploy']' returned non-zero exit status 1.

jiangming7301 commented 11 months ago

I downloaded the Paddle Inference C++ package into /home/dlinano/paddle_inference_install_dir

jiangming7301 commented 11 months ago

I already created the model.trt file. Running python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True still errors; the output is as follows:

WARNING:root:RuntimeOption.set_trt_cache_file will be deprecated in v1.2.0, please use RuntimeOption.trt_option.serialize_file = ./tensorrt_cache/model.trt instead.
WARNING:root:RuntimeOption.set_trt_input_shape will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape() instead.
WARNING:root:RuntimeOption.set_trt_input_shape will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape() instead.
[INFO] fastdeploy/vision/common/processors/transform.cc(45)::FuseNormalizeCast Normalize and Cast are fused to Normalize in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW Normalize and HWC2CHW are fused to NormalizeAndPermute in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[WARN][Paddle2ONNX] [multiclass_nms3: multiclass_nms3_0.tmp_1] Paramter nms_top_k:10000 is exceed limit in TensorRT BatchedNMS plugin, will force to 4096.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(719)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ./tensorrt_cache/model.trt, will load it directly.
Traceback (most recent call last):
  File "infer_ppyoloe.py", line 64, in <module>
    model_file, params_file, config_file, runtime_option=runtime_option, model_format=ModelFormat.PADDLE)
  File "/usr/local/lib/python3.6/dist-packages/fastdeploy/vision/detection/ppdet/__init__.py", line 115, in __init__
    model_format)
IndexError: basic_string::at: __n (which is 0) >= this->size() (which is 0)

jiangjiajun commented 11 months ago

Regarding "I already created the model.trt file": did you create this file by hand? You do not need to create the file manually, only the directory, i.e. the tensorrt_cache directory. A hand-created model.trt file cannot be loaded.
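A minimal sketch of that suggestion, assuming the directory name from the script above and that infer_ppyoloe.py is launched from the current directory:

```shell
# Create only the cache *directory*; FastDeploy serializes model.trt into it
# on the first inference run. A hand-made model.trt cannot be deserialized.
rm -rf tensorrt_cache      # drop any stale or hand-created cache first
mkdir -p tensorrt_cache    # an empty directory is all that is needed
ls -A tensorrt_cache       # prints nothing until the first run writes the engine
```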

jiangming7301 commented 11 months ago

I commented out that line (option.set_trt_cache_file("./tensorrt_cache/model.trt")), so it should not be related to it, yet the same error still occurs:

[WARN][Paddle2ONNX] [multiclass_nms3: multiclass_nms3_0.tmp_1] Paramter nms_top_k:10000 is exceed limit in TensorRT BatchedNMS plugin, will force to 4096.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(719)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ./tensorrt_cache/model.trt, will load it directly.
Traceback (most recent call last):
  File "infer_ppyoloe.py", line 64, in <module>
    model_file, params_file, config_file, runtime_option=runtime_option, model_format=ModelFormat.PADDLE)
  File "/usr/local/lib/python3.6/dist-packages/fastdeploy/vision/detection/ppdet/__init__.py", line 115, in __init__
    model_format)
IndexError: basic_string::at: __n (which is 0) >= this->size() (which is 0)

Also, could you take a look at the second question as well?

jiangming7301 commented 11 months ago

After deleting the model.trt file, the error message is as follows:

[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(239)::log 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(348)::Infer Failed to Infer with TensorRT.
[ERROR] fastdeploy/vision/detection/ppdet/base.cc(73)::BatchPredict Failed to inference by runtime.
DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]

Visualized result save in ./visualized_result.jpg
After the run, the system automatically created the model.trt file, but prediction still fails with the error above.

jiangjiajun commented 11 months ago

This is probably because the TensorRT version bundled with this JetPack is too old; using native TensorRT directly hits this problem. I suggest you recompile with Paddle Inference integrated, and use the TensorRT built into Paddle Inference for acceleration.
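The suggested rebuild might look like the sketch below. The install path is an assumption (use wherever you extracted the Jetson Paddle Inference package), and the build command itself is left commented out since it must run inside FastDeploy/python:

```shell
# Environment for a Paddle-Inference-enabled FastDeploy Python build on Jetson.
export BUILD_ON_JETSON=ON
export ENABLE_VISION=ON
export ENABLE_PADDLE_BACKEND=ON
export PADDLEINFERENCE_DIRECTORY="$HOME/paddle_inference_install_dir"  # assumed path
# Then, from FastDeploy/python, clear the old build cache and rebuild:
#   rm -rf .setuptools-cmake-build && python setup.py build
echo "PADDLEINFERENCE_DIRECTORY=$PADDLEINFERENCE_DIRECTORY"
```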

jiangming7301 commented 11 months ago

But when compiling with ENABLE_PADDLE_BACKEND=ON, adding the following

ENABLE_PADDLE_BACKEND & PADDLEINFERENCE_DIRECTORY are optional
export ENABLE_PADDLE_BACKEND=ON
export PADDLEINFERENCE_DIRECTORY=/Download/paddle_inference_jetson
it still errors out; the error message is as follows:

dlinano@jetson-nano:~/FastDeploy/python$ export BUILD_ON_JETSON=ON
dlinano@jetson-nano:~/FastDeploy/python$ export ENABLE_VISION=ON
dlinano@jetson-nano:~/FastDeploy/python$ export ENABLE_PADDLE_BACKEND=ON
dlinano@jetson-nano:~/FastDeploy/python$ export PADDLEINFERENCE_DIRECTORY=/paddle_inference_install_dir
dlinano@jetson-nano:~/FastDeploy/python$ python setup.py build
running build
running build_py
running create_version
running cmake_build
Decompress file /home/dlinano/FastDeploy/python/.setuptools-cmake-build/patchelf-0.15.0-aarch64.tar.gz ...
-- Use the default onnxruntime lib. The ONNXRuntime path: /home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/install/onnxruntime
Cannot compile with onnxruntime-gpu while in linux-aarch64 platform, fallback to onnxruntime-cpu
CMake Error at cmake/paddle_inference.cmake:57 (find_package):
  By not providing "FindPython.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Python", but
  CMake did not find one.

Could not find a package configuration file provided by "Python" with any of the following names:

PythonConfig.cmake
python-config.cmake

Add the installation prefix of "Python" to CMAKE_PREFIX_PATH or set "Python_DIR" to a directory containing one of the above files. If "Python" provides a separate development package or SDK, be sure it has been installed.
Call Stack (most recent call first): CMakeLists.txt:245 (include)

-- Configuring incomplete, errors occurred! See also "/home/dlinano/FastDeploy/python/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log". Traceback (most recent call last): File "setup.py", line 465, in license='Apache 2.0') File "/usr/lib/python3/dist-packages/setuptools/init.py", line 129, in setup return distutils.core.setup(**attrs) File "/usr/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "setup.py", line 308, in run self.run_command('cmake_build') File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "setup.py", line 294, in run subprocess.check_call(cmake_args) File "/usr/lib/python3.6/subprocess.py", line 311, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_EXECUTABLE=/usr/bin/python', '-DBUILD_FASTDEPLOY_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=paddle2onnx', '-DPY_EXT_SUFFIX=.cpython-36m-aarch64-linux-gnu.so', '-DCMAKE_BUILD_TYPE=Release', '-DLIBRARY_NAME=fastdeploy', '-DPY_LIBRARY_NAME=fastdeploy_main', '-DENABLE_TVM_BACKEND=OFF', '-DENABLE_RKNPU2_BACKEND=OFF', '-DENABLE_SOPHGO_BACKEND=OFF', '-DENABLE_ORT_BACKEND=OFF', '-DENABLE_OPENVINO_BACKEND=OFF', '-DENABLE_PADDLE_BACKEND=ON', '-DENABLE_POROS_BACKEND=OFF', '-DENABLE_TRT_BACKEND=OFF', '-DENABLE_LITE_BACKEND=OFF', 
'-DENABLE_VISION=ON', '-DENABLE_ENCRYPTION=OFF', '-DENABLE_FLYCV=OFF', '-DENABLE_CVCUDA=OFF', '-DENABLE_TEXT=OFF', '-DENABLE_BENCHMARK=OFF', '-DWITH_GPU=OFF', '-DWITH_IPU=OFF', '-DWITH_OPENCL=OFF', '-DWITH_TIMVX=OFF', '-DWITH_DIRECTML=OFF', '-DWITH_ASCEND=OFF', '-DWITH_KUNLUNXIN=OFF', '-DRKNN2_TARGET_SOC=', '-DTRT_DIRECTORY=UNDEFINED', '-DCUDA_DIRECTORY=/usr/local/cuda', '-DOPENCV_DIRECTORY=', '-DORT_DIRECTORY=', '-DPADDLEINFERENCE_DIRECTORY=/paddle_inference_install_dir', '-DPADDLEINFERENCE_VERSION=', '-DPADDLEINFERENCE_URL=', '-DPADDLEINFERENCE_API_COMPAT_2_4_x=OFF', '-DPADDLEINFERENCE_API_COMPAT_2_5_x=OFF', '-DPADDLEINFERENCE_API_COMPAT_DEV=OFF', '-DPADDLEINFERENCE_API_CUSTOM_OP=OFF', '-DPADDLE2ONNX_URL=', '-DPADDLELITE_URL=', '-DBUILD_ON_JETSON=ON', '-DBUILD_PADDLE2ONNX=OFF', '/home/dlinano/FastDeploy']' returned non-zero exit status 1.

jiangjiajun commented 11 months ago

This is because Python was not found at compile time. There are two approaches you can try

jiangming7301 commented 11 months ago

Could you give more detailed instructions?

jiangjiajun commented 11 months ago

This is because Python was not found at compile time. There are two approaches you can try:

  • Upgrade cmake to the latest version, delete the build cache, and recompile
  • Install a miniconda Python environment, delete the build cache, and recompile

Are you running into any problems with these two approaches?

jiangming7301 commented 11 months ago

After upgrading cmake, it still errors; the output is as follows:

dlinano@jetson-nano:~/FastDeploy/python$ python setup.py build
running build
running build_py
running create_version
running cmake_build
CMake Warning (dev) at CMakeLists.txt:15 (PROJECT):
  Policy CMP0048 is not set: project() command manages VERSION variables.
  Run "cmake --help-policy CMP0048" for policy details. Use the cmake_policy
  command to set the policy and suppress this warning.

The following variable(s) would be set to empty:

CMAKE_PROJECT_VERSION
CMAKE_PROJECT_VERSION_MAJOR
CMAKE_PROJECT_VERSION_MINOR
CMAKE_PROJECT_VERSION_PATCH

This warning is for project developers. Use -Wno-dev to suppress it.

-- The C compiler identification is GNU 7.5.0 -- The CXX compiler identification is GNU 7.5.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done Decompress file /home/dlinano/FastDeploy/python/.setuptools-cmake-build/patchelf-0.15.0-aarch64.tar.gz ... -- Use the default onnxruntime lib. The ONNXRuntime path: /home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/install/onnxruntime Cannot compile with onnxruntime-gpu while in linux-aarch64 platform, fallback to onnxruntime-cpu -- Found Python: /usr/bin/python3.6 (found version "3.6.9") found components: Interpreter Development Development.Module Development.Embed -- Copying /home/dlinano/paddle_inference_install_dir to /home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/install/paddle_inference ... CMake Error at cmake/paddle_inference.cmake:272 (string): string sub-command REGEX, mode MATCH needs at least 5 arguments total to command. Call Stack (most recent call first): CMakeLists.txt:245 (include)

CMake Error at cmake/paddle_inference.cmake:273 (string): string sub-command REGEX, mode MATCH needs at least 5 arguments total to command. Call Stack (most recent call first): CMakeLists.txt:245 (include)

CMake Error at cmake/paddle_inference.cmake:274 (string): string sub-command REGEX, mode MATCH needs at least 5 arguments total to command. Call Stack (most recent call first): CMakeLists.txt:245 (include)

-- The CUDA compiler identification is NVIDIA 10.2.300 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: /usr/local/cuda-10.2/bin/nvcc - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- CUDA compiler: /usr/local/cuda-10.2/bin/nvcc, version: NVIDIA 10.2.300 -- CUDA detected: 10.2.300 -- NVCC_FLAGS_EXTRA: -gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_72,code=sm_72 -- Use the opencv lib specified by user. The OpenCV path: /usr/lib/aarch64-linux-gnu/cmake/opencv4/ -- -- *****FastDeploy Building Summary** -- CMake version : 3.22.1 -- CMake command : /usr/local/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ standard : 11 -- C++ cuda standard : 11 -- C++ compiler version : 7.5.0 -- CXX flags : -Wno-format -g0 -O3 -- EXE linker flags : -- Shared linker flags : -- Build type : Release -- Compile definitions : _GLIBCXX_USE_CXX11_ABI=1;FASTDEPLOY_LIB;CMAKE_BUILD_TYPE=Release;ENABLE_ORT_BACKEND;ENABLE_PADDLE_BACKEND;WITH_GPU;ENABLE_TRT_BACKEND;ENABLE_VISION;ENABLE_PADDLE2ONNX -- CMAKE_PREFIX_PATH : -- CMAKE_INSTALL_PREFIX : /usr/local -- CMAKE_MODULE_PATH : -- -- FastDeploy version : 0.0.0 -- ENABLE_ORT_BACKEND : ON -- ENABLE_RKNPU2_BACKEND : OFF -- ENABLE_HORIZON_BACKEND : OFF -- ENABLE_SOPHGO_BACKEND : OFF -- ENABLE_PADDLE_BACKEND : ON -- ENABLE_LITE_BACKEND : OFF -- ENABLE_POROS_BACKEND : OFF -- ENABLE_TRT_BACKEND : ON -- ENABLE_OPENVINO_BACKEND : OFF -- ENABLE_TVM_BACKEND : OFF -- ENABLE_BENCHMARK : OFF -- ENABLE_VISION : ON -- ENABLE_TEXT : OFF -- ENABLE_ENCRYPTION : OFF -- ENABLE_FLYCV : OFF -- ENABLE_CVCUDA : OFF -- WITH_GPU : ON -- WITH_IPU : OFF -- WITH_OPENCL : OFF -- WITH_TESTING : OFF -- WITH_ASCEND : OFF -- WITH_DIRECTML : OFF -- WITH_TIMVX : OFF -- WITH_KUNLUNXIN : OFF -- WITH_CAPI : OFF -- WITH_CSHARPAPI : OFF -- ONNXRuntime version : 1.12.0 -- Paddle Inference version : -- PADDLE_WITH_ENCRYPT 
: OFF -- PADDLE_WITH_AUTH : OFF -- CUDA_DIRECTORY : /usr/local/cuda -- TRT_DRECTORY : /home/dlinano/FastDeploy/python/.setuptools-cmake-build/UNDEFINED -- Python executable : /usr/bin/python -- Python includes : /usr/include/python3.6m -- Configuring incomplete, errors occurred! See also "/home/dlinano/FastDeploy/python/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log". Traceback (most recent call last): File "setup.py", line 465, in license='Apache 2.0') File "/usr/lib/python3/dist-packages/setuptools/init.py", line 129, in setup return distutils.core.setup(**attrs) File "/usr/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "setup.py", line 308, in run self.run_command('cmake_build') File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "setup.py", line 294, in run subprocess.check_call(cmake_args) File "/usr/lib/python3.6/subprocess.py", line 311, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_EXECUTABLE=/usr/bin/python', '-DBUILD_FASTDEPLOY_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=paddle2onnx', '-DPY_EXT_SUFFIX=.cpython-36m-aarch64-linux-gnu.so', '-DCMAKE_BUILD_TYPE=Release', '-DLIBRARY_NAME=fastdeploy', '-DPY_LIBRARY_NAME=fastdeploy_main', '-DENABLE_TVM_BACKEND=OFF', 
'-DENABLE_RKNPU2_BACKEND=OFF', '-DENABLE_SOPHGO_BACKEND=OFF', '-DENABLE_ORT_BACKEND=OFF', '-DENABLE_OPENVINO_BACKEND=OFF', '-DENABLE_PADDLE_BACKEND=ON', '-DENABLE_POROS_BACKEND=OFF', '-DENABLE_TRT_BACKEND=OFF', '-DENABLE_LITE_BACKEND=OFF', '-DENABLE_VISION=ON', '-DENABLE_ENCRYPTION=OFF', '-DENABLE_FLYCV=OFF', '-DENABLE_CVCUDA=OFF', '-DENABLE_TEXT=OFF', '-DENABLE_BENCHMARK=OFF', '-DWITH_GPU=OFF', '-DWITH_IPU=OFF', '-DWITH_OPENCL=OFF', '-DWITH_TIMVX=OFF', '-DWITH_DIRECTML=OFF', '-DWITH_ASCEND=OFF', '-DWITH_KUNLUNXIN=OFF', '-DRKNN2_TARGET_SOC=', '-DTRT_DIRECTORY=UNDEFINED', '-DCUDA_DIRECTORY=/usr/local/cuda', '-DOPENCV_DIRECTORY=', '-DORT_DIRECTORY=', '-DPADDLEINFERENCE_DIRECTORY=/home/dlinano/paddle_inference_install_dir', '-DPADDLEINFERENCE_VERSION=', '-DPADDLEINFERENCE_URL=', '-DPADDLEINFERENCE_API_COMPAT_2_4_x=OFF', '-DPADDLEINFERENCE_API_COMPAT_2_5_x=OFF', '-DPADDLEINFERENCE_API_COMPAT_DEV=OFF', '-DPADDLEINFERENCE_API_CUSTOM_OP=OFF', '-DPADDLE2ONNX_URL=', '-DPADDLELITE_URL=', '-DBUILD_ON_JETSON=ON', '-DBUILD_PADDLE2ONNX=OFF', '/home/dlinano/FastDeploy']' returned non-zero exit status 1.

jiangjiajun commented 11 months ago

https://github.com/PaddlePaddle/FastDeploy/blob/develop/cmake/paddle_inference.cmake#L271

Add this at line 270:

set(PADDLEINFERENCE_VERSION "0.0.0")

and give it a try.
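If you prefer to script the edit, the sketch below demonstrates the insertion on a scratch stand-in file; in a real checkout the target is cmake/paddle_inference.cmake, and the line number should be double-checked first, since it can drift between versions:

```shell
# Stand-in for paddle_inference.cmake: two placeholder lines for lines 270/271.
f=$(mktemp)
printf 'placeholder_line_270\nplaceholder_line_271\n' > "$f"
# Insert the set() call before the second placeholder, i.e. "at line 270"
# relative to this stand-in file (GNU sed's one-line `i` form).
sed -i '2i set(PADDLEINFERENCE_VERSION "0.0.0")' "$f"
grep -n 'PADDLEINFERENCE_VERSION' "$f"   # the inserted line now sits at position 2
```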

jiangming7301 commented 11 months ago

I added it at line 270, and now a different error is reported, as follows:

dlinano@jetson-nano:~/FastDeploy/python$ python setup.py build
running build
running build_py
running create_version
running cmake_build
CMake Warning (dev) at CMakeLists.txt:15 (PROJECT):
  Policy CMP0048 is not set: project() command manages VERSION variables.
  Run "cmake --help-policy CMP0048" for policy details. Use the cmake_policy
  command to set the policy and suppress this warning.

The following variable(s) would be set to empty:

CMAKE_PROJECT_VERSION
CMAKE_PROJECT_VERSION_MAJOR
CMAKE_PROJECT_VERSION_MINOR
CMAKE_PROJECT_VERSION_PATCH

This warning is for project developers. Use -Wno-dev to suppress it.

Decompress file /home/dlinano/FastDeploy/python/.setuptools-cmake-build/patchelf-0.15.0-aarch64.tar.gz ... -- Use the default onnxruntime lib. The ONNXRuntime path: /home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/install/onnxruntime Cannot compile with onnxruntime-gpu while in linux-aarch64 platform, fallback to onnxruntime-cpu -- Copying /home/dlinano/paddle_inference_install_dir to /home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/install/paddle_inference ... -- CUDA compiler: /usr/local/cuda-10.2/bin/nvcc, version: NVIDIA 10.2.300 -- CUDA detected: 10.2.300 -- NVCC_FLAGS_EXTRA: -gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_72,code=sm_72 -- Use the opencv lib specified by user. The OpenCV path: /usr/lib/aarch64-linux-gnu/cmake/opencv4/ -- -- ***FastDeploy Building Summary** -- CMake version : 3.22.1 -- CMake command : /usr/local/bin/cmake -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ standard : 11 -- C++ cuda standard : 11 -- C++ compiler version : 7.5.0 -- CXX flags : -Wno-format -g0 -O3 -- EXE linker flags : -- Shared linker flags : -- Build type : Release -- Compile definitions : _GLIBCXX_USE_CXX11_ABI=1;FASTDEPLOY_LIB;CMAKE_BUILD_TYPE=Release;ENABLE_ORT_BACKEND;ENABLE_PADDLE_BACKEND;PADDLEINFERENCE_API_COMPAT_DEV;WITH_GPU;ENABLE_TRT_BACKEND;ENABLE_VISION;ENABLE_PADDLE2ONNX -- CMAKE_PREFIX_PATH : -- CMAKE_INSTALL_PREFIX : /usr/local -- CMAKE_MODULE_PATH : -- -- FastDeploy version : 0.0.0 -- ENABLE_ORT_BACKEND : ON -- ENABLE_RKNPU2_BACKEND : OFF -- ENABLE_HORIZON_BACKEND : OFF -- ENABLE_SOPHGO_BACKEND : OFF -- ENABLE_PADDLE_BACKEND : ON -- ENABLE_LITE_BACKEND : OFF -- ENABLE_POROS_BACKEND : OFF -- ENABLE_TRT_BACKEND : ON -- ENABLE_OPENVINO_BACKEND : OFF -- ENABLE_TVM_BACKEND : OFF -- ENABLE_BENCHMARK : OFF -- ENABLE_VISION : ON -- ENABLE_TEXT : OFF -- ENABLE_ENCRYPTION : OFF -- ENABLE_FLYCV : OFF -- ENABLE_CVCUDA : OFF -- WITH_GPU : ON -- WITH_IPU : OFF -- WITH_OPENCL 
: OFF -- WITH_TESTING : OFF -- WITH_ASCEND : OFF -- WITH_DIRECTML : OFF -- WITH_TIMVX : OFF -- WITH_KUNLUNXIN : OFF -- WITH_CAPI : OFF -- WITH_CSHARPAPI : OFF -- ONNXRuntime version : 1.12.0 -- Paddle Inference version : 0.0.0 -- PADDLE_WITH_ENCRYPT : OFF -- PADDLE_WITH_AUTH : OFF -- CUDA_DIRECTORY : /usr/local/cuda -- TRT_DRECTORY : /home/dlinano/FastDeploy/python/.setuptools-cmake-build/UNDEFINED -- Python executable : /usr/bin/python -- Python includes : /usr/include/python3.6m -- Configuring done -- Generating done -- Build files have been written to: /home/dlinano/FastDeploy/python/.setuptools-cmake-build [ 0%] Creating directories for 'extern_onnxruntime' [ 0%] Creating directories for 'extern_paddle2onnx' Consolidate compiler generated dependencies of target yaml-cpp [ 0%] Performing download step (download, verify and extract) for 'extern_paddle2onnx' [ 1%] Performing download step (download, verify and extract) for 'extern_onnxruntime' -- File already exists but no hash specified (use URL_HASH): file='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/paddle2onnx/src/paddle2onnx-linux-aarch64-1.0.8rc.tgz' Old file will be removed and new file downloaded from URL. -- Downloading... dst='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/paddle2onnx/src/paddle2onnx-linux-aarch64-1.0.8rc.tgz' timeout='none' inactivity timeout='none' -- Using src='https://bj.bcebos.com/fastdeploy/third_libs/paddle2onnx-linux-aarch64-1.0.8rc.tgz' -- File already exists but no hash specified (use URL_HASH): file='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/onnxruntime/src/onnxruntime-linux-aarch64-1.12.0.tgz' Old file will be removed and new file downloaded from URL. -- Downloading... 
dst='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/onnxruntime/src/onnxruntime-linux-aarch64-1.12.0.tgz' timeout='none' inactivity timeout='none' -- Using src='https://bj.bcebos.com/paddle2onnx/libs/onnxruntime-linux-aarch64-1.12.0.tgz' [ 2%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/contrib/graphbuilderadapter.cpp.o [ 2%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/contrib/graphbuilder.cpp.o -- Downloading... done -- extracting... src='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/paddle2onnx/src/paddle2onnx-linux-aarch64-1.0.8rc.tgz' dst='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/paddle2onnx/src/extern_paddle2onnx' -- extracting... [tar xfz] -- extracting... [analysis] -- extracting... [rename] -- extracting... [clean up] -- extracting... done [ 2%] No update step for 'extern_paddle2onnx' [ 3%] No patch step for 'extern_paddle2onnx' [ 4%] No configure step for 'extern_paddle2onnx' [ 4%] No build step for 'extern_paddle2onnx' [ 4%] Performing install step for 'extern_paddle2onnx' [ 4%] Completed 'extern_paddle2onnx' [ 4%] Built target extern_paddle2onnx [ 4%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/binary.cpp.o -- Downloading... done -- extracting... src='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/onnxruntime/src/onnxruntime-linux-aarch64-1.12.0.tgz' dst='/home/dlinano/FastDeploy/python/.setuptools-cmake-build/third_libs/onnxruntime/src/extern_onnxruntime' -- extracting... [tar xfz] [ 4%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/convert.cpp.o [ 4%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/depthguard.cpp.o -- extracting... [analysis] -- extracting... [rename] -- extracting... [clean up] -- extracting... 
done [ 5%] No update step for 'extern_onnxruntime' [ 5%] No patch step for 'extern_onnxruntime' [ 5%] No configure step for 'extern_onnxruntime' [ 5%] No build step for 'extern_onnxruntime' [ 5%] Performing install step for 'extern_onnxruntime' [ 5%] Completed 'extern_onnxruntime' [ 5%] Built target extern_onnxruntime [ 6%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/directives.cpp.o [ 6%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/emit.cpp.o [ 6%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/emitfromevents.cpp.o [ 7%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/emitter.cpp.o [ 7%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/emitterstate.cpp.o [ 7%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/emitterutils.cpp.o [ 7%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/exceptions.cpp.o [ 8%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/exp.cpp.o [ 8%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/memory.cpp.o [ 8%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/node.cpp.o [ 8%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/node_data.cpp.o [ 9%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/nodebuilder.cpp.o [ 9%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/nodeevents.cpp.o [ 9%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/null.cpp.o [ 10%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/ostream_wrapper.cpp.o [ 10%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/parse.cpp.o [ 10%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/parser.cpp.o [ 10%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/regex_yaml.cpp.o [ 11%] Building CXX object 
third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/scanner.cpp.o
[ 11%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/scanscalar.cpp.o
[ 11%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/scantag.cpp.o
[ 11%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/scantoken.cpp.o
[ 12%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/simplekey.cpp.o
[ 12%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/singledocparser.cpp.o
[ 12%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/stream.cpp.o
[ 13%] Building CXX object third_party/yaml-cpp/CMakeFiles/yaml-cpp.dir/src/tag.cpp.o
[ 13%] Linking CXX static library libyaml-cpp.a
[ 13%] Built target yaml-cpp
Consolidate compiler generated dependencies of target yaml-cpp-sandbox
Consolidate compiler generated dependencies of target yaml-cpp-read
Consolidate compiler generated dependencies of target yaml-cpp-parse
[ 14%] Building CXX object third_party/yaml-cpp/util/CMakeFiles/yaml-cpp-read.dir/read.cpp.o
[ 14%] Building CXX object third_party/yaml-cpp/util/CMakeFiles/yaml-cpp-parse.dir/parse.cpp.o
[ 14%] Building CXX object third_party/yaml-cpp/util/CMakeFiles/yaml-cpp-sandbox.dir/sandbox.cpp.o
Consolidate compiler generated dependencies of target fastdeploy
[ 14%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/benchmark/utils.cc.o
[ 14%] Linking CXX executable parse
[ 14%] Linking CXX executable read
[ 14%] Built target yaml-cpp-parse
[ 14%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/core/allocate.cc.o
[ 14%] Built target yaml-cpp-read
[ 15%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/core/fd_tensor.cc.o
[ 15%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/core/fd_type.cc.o
[ 16%] Linking CXX executable sandbox
[ 16%] Built target yaml-cpp-sandbox
[ 16%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/fastdeploy_model.cc.o
[ 17%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/cast.cc.o
[ 17%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/clip.cc.o
[ 17%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/concat.cc.o
[ 17%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/cumprod.cc.o
[ 18%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/eigen.cc.o
[ 18%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/elementwise.cc.o
[ 18%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/full.cc.o
[ 18%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/gather_scatter_along_axis.cc.o
[ 19%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/gaussian_random.cc.o
[ 19%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/isfinite.cc.o
[ 19%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/linspace.cc.o
[ 20%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/math.cc.o
[ 20%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/pad.cc.o
[ 20%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/quantile.cc.o
[ 20%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/reduce.cc.o
[ 21%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/slice.cc.o
[ 21%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/softmax.cc.o
[ 21%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/sort.cc.o
[ 21%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/split.cc.o
[ 22%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/tile.cc.o
[ 22%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/function/transpose.cc.o
[ 22%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/enum_variables.cc.o
[ 23%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/runtime.cc.o
[ 23%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/runtime_option.cc.o
[ 23%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/utils/utils.cc.o
[ 23%] Building CUDA object CMakeFiles/fastdeploy.dir/fastdeploy/function/cuda_cast.cu.o
[ 24%] Building CUDA object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/common/cuda/adaptive_pool2d_kernel.cu.o
[ 24%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/ort/ops/adaptive_pool2d.cc.o
[ 24%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/ort/ops/multiclass_nms.cc.o
[ 24%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/ort/ort_backend.cc.o
[ 24%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/paddle/paddle_backend.cc.o
[ 25%] Building CXX object CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/ort/utils.cc.o
/home/dlinano/FastDeploy/fastdeploy/runtime/backends/paddle/paddle_backend.cc: In member function ‘void fastdeploy::PaddleBackend::BuildOption(const fastdeploy::PaddleBackendOption&)’:
/home/dlinano/FastDeploy/fastdeploy/runtime/backends/paddle/paddle_backend.cc:57:15: error: ‘using Config = struct paddle::AnalysisConfig {aka struct paddle::AnalysisConfig}’ has no member named ‘Exp_EnableUseCutlass’
   config.Exp_EnableUseCutlass();
/home/dlinano/FastDeploy/fastdeploy/runtime/backends/paddle/paddle_backend.cc: In member function ‘bool fastdeploy::PaddleBackend::InitFromPaddle(const string&, const string&, bool, const fastdeploy::PaddleBackendOption&)’:
/home/dlinano/FastDeploy/fastdeploy/runtime/backends/paddle/paddle_backend.cc:334:35: error: ‘using element_type = class paddle_infer::Predictor {aka class paddle_infer::Predictor}’ has no member named ‘GetInputTensorShape’; did you mean ‘GetInputTypes’?
   auto input_shapes = predictor->GetInputTensorShape();
/home/dlinano/FastDeploy/fastdeploy/runtime/backends/paddle/paddle_backend.cc:335:36: error: ‘using element_type = class paddle_infer::Predictor {aka class paddle_infer::Predictor}’ has no member named ‘GetOutputTensorShape’; did you mean ‘GetOutputHandle’?
   auto output_shapes = predictor->GetOutputTensorShape();
/home/dlinano/FastDeploy/fastdeploy/runtime/backends/paddle/paddle_backend.cc:336:36: error: ‘using element_type = class paddle_infer::Predictor {aka class paddle_infer::Predictor}’ has no member named ‘GetOutputTypes’; did you mean ‘GetInputTypes’?
   auto output_dtypes = predictor->GetOutputTypes();
CMakeFiles/fastdeploy.dir/build.make:579: recipe for target 'CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/paddle/paddle_backend.cc.o' failed
make[2]: *** [CMakeFiles/fastdeploy.dir/fastdeploy/runtime/backends/paddle/paddle_backend.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....

CMakeFiles/Makefile2:257: recipe for target 'CMakeFiles/fastdeploy.dir/all' failed
make[1]: *** [CMakeFiles/fastdeploy.dir/all] Error 2
Makefile:155: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 465, in <module>
    license='Apache 2.0')
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 129, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 308, in run
    self.run_command('cmake_build')
  File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "setup.py", line 302, in run
    subprocess.check_call(build_args)
  File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '.', '--', '-j', '4']' returned non-zero exit status 2

jiangjiajun commented 11 months ago

Please post the link to the Paddle Inference package you downloaded.

jiangming7301 commented 11 months ago

https://paddle-inference-lib.bj.bcebos.com/2.4.2/cxx_c/Jetson/jetpack4.6.1_gcc7.5/all/paddle_inference_install_dir.tgz

jiangming7301 commented 11 months ago

image

jiangjiajun commented 11 months ago

@jiangming7301 I see you downloaded version 2.4.2, so change that line to set(PADDLEINFERENCE_VERSION "2.4.2").

jiangming7301 commented 11 months ago

The build succeeded. How do I remove the previously installed fastdeploy? Also, can this be compiled on Windows and then installed on the Jetson Nano?

jiangming7301 commented 11 months ago

I've removed the old fastdeploy, but every inference run is still slow. Do I need to set up a cache?

jiangjiajun commented 11 months ago
jiangming7301 commented 11 months ago

option.use_trt_backend() still doesn't work; only option.use_paddle_backend() runs. But with option.set_trt_cache_file("./tensorrt_cache/model.trt") set, the model.trt file is never created, and every run is still slow.

jiangjiajun commented 11 months ago
option.use_paddle_backend()
option.paddle_infer_option.enable_trt = True
option.trt_option.serialize_file = "xxxx/model.trt"
option.trt_option.set_shape(....)
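Taken together, these four lines amount to a sketch like the one below. This is not a verified implementation: the 640x640 and [1, 2] shapes are taken from the GetDynamicShapeFromOption log lines later in this thread, the im_shape entry is an assumption (exported PP-YOLO models often have it), and the set_shape argument order of (name, min, opt, max) is assumed — check it against the FastDeploy docs.

```python
# Dynamic-shape table for the PP-YOLO inputs seen in the logs in this thread.
# "im_shape" is an assumed third input; verify against your exported model.
TRT_INPUT_SHAPES = {
    "image":        ([1, 3, 640, 640], [1, 3, 640, 640], [1, 3, 640, 640]),
    "scale_factor": ([1, 2], [1, 2], [1, 2]),
    "im_shape":     ([1, 2], [1, 2], [1, 2]),
}

def build_paddle_trt_option(trt_cache="./tensorrt_cache/model.trt"):
    """Build a RuntimeOption that runs Paddle Inference with its TensorRT
    subgraph engine and serializes the engine so later runs skip the slow
    engine build (which is what makes the first run take minutes on a Nano)."""
    # Imported lazily so the shape table above can be inspected without
    # fastdeploy installed; in a real script a top-level import is fine.
    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_gpu()
    option.use_paddle_backend()                   # Paddle backend...
    option.paddle_infer_option.enable_trt = True  # ...with TRT subgraphs
    option.trt_option.serialize_file = trt_cache  # cache the built engine
    for name, (min_s, opt_s, max_s) in TRT_INPUT_SHAPES.items():
        # Assumed argument order: (name, min_shape, opt_shape, max_shape).
        option.trt_option.set_shape(name, min_s, opt_s, max_s)
    return option
```

Note that, per the warning in the logs below, with paddle2trt enabled the cache may be written next to the Paddle model files rather than at the serialize_file path.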
jiangming7301 commented 11 months ago

Adding option.paddle_infer_option.enable_trt = True raises an error; output below:

dlinano@jetson-nano:~/FastDeploy/examples/vision/detection/paddledetection/python$ python infer_ppyolo.py --device gpu --use_trt True
100%|█████████████████████████| 171084/171084 [00:23<00:00, 7402.55KB/s]
Successfully download model at path: /home/dlinano/.fastdeploy/models/ppyolo_r50vd_dcn_1x_coco
WARNING:root:RuntimeOption.set_trt_input_shape will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape() instead.
WARNING:root:RuntimeOption.set_trt_input_shape will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape() instead.
[INFO] fastdeploy/vision/common/processors/transform.cc(45)::FuseNormalizeCast Normalize and Cast are fused to Normalize in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW Normalize and HWC2CHW are fused to NormalizeAndPermute in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(29)::BuildOption Will inference_precision float32
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(66)::BuildOption Will try to use tensorrt inference with Paddle Backend.
[WARNING] fastdeploy/runtime/backends/paddle/paddle_backend.cc(78)::BuildOption Detect that tensorrt cache file has been set to ./tensorrt_cache/model.trt, but while enable paddle2trt, please notice that the cache file will save to the directory where paddle model saved.
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(511)::GetDynamicShapeFromOption image: the max shape = [1, 3, 640, 640], the min shape = [1, 3, 640, 640], the opt shape = [1, 3, 640, 640]
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(511)::GetDynamicShapeFromOption scale_factor: the max shape = [1, 2], the min shape = [1, 2], the opt shape = [1, 2]
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(474)::SetTRTDynamicShapeToConfig Start setting trt dynamic shape.
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(476)::SetTRTDynamicShapeToConfig Finish setting trt dynamic shape.
Traceback (most recent call last):
  File "infer_ppyolo.py", line 67, in <module>
    model_file, params_file, config_file, runtime_option=runtime_option)
  File "/usr/local/lib/python3.6/dist-packages/fastdeploy/vision/detection/ppdet/__init__.py", line 187, in __init__
    model_format)
RuntimeError:


C++ Traceback (most recent call last):

0   paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
1   paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
2   std::unique_ptr<paddle::PaddlePredictor, std::default_delete > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
3   paddle::AnalysisPredictor::Init(std::shared_ptr const&, std::shared_ptr const&)
4   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr const&)
5   paddle::AnalysisPredictor::OptimizeInferenceProgram()
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument)
7   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument)
8   paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete >)
9   paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph) const
10  paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph) const
11  paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node, paddle::framework::ir::Graph, std::vector<std::string, std::allocator > const&, std::vector<std::string, std::allocator >) const
12  paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc, paddle::framework::Scope const&, std::vector<std::string, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<std::string, std::allocator > const&, paddle::inference::tensorrt::TensorRTEngine)
13  phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const, int)
14  phi::enforce::GetCurrentTraceBackStringabi:cxx11


Error Message Summary:

InvalidArgumentError: some trt inputs dynamic shape info not set, check the INFO log above for more details.
  [Hint: Expected all_dynamic_shape_set == true, but received all_dynamic_shape_set:0 != true:1.] (at /home/paddle/data/xly/workspace/23303/Paddle/paddle/fluid/inference/tensorrt/convert/op_converter.h:355)
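This InvalidArgumentError means at least one input that reaches a TensorRT subgraph has no min/opt/max shape registered: the log above only configures image and scale_factor. A pure-Python check like the one below can catch the gap before Paddle does. The input list for ppyolo_r50vd_dcn_1x_coco is an assumption based on the log; confirm your model's actual inputs with a viewer such as Netron.

```python
def missing_trt_shapes(model_inputs, shape_table):
    """Return the model inputs that have no dynamic-shape entry.

    The Paddle-TRT check behind "some trt inputs dynamic shape info not set"
    fires exactly when this list is non-empty."""
    return sorted(set(model_inputs) - set(shape_table))

# Inputs of the exported PP-YOLO model (assumed; verify against your export).
MODEL_INPUTS = ["image", "scale_factor", "im_shape"]

# The shapes actually configured in the log above: only two of the three.
CONFIGURED = {
    "image":        ([1, 3, 640, 640], [1, 3, 640, 640], [1, 3, 640, 640]),
    "scale_factor": ([1, 2], [1, 2], [1, 2]),
}

missing = missing_trt_shapes(MODEL_INPUTS, CONFIGURED)
# With the log's configuration, "im_shape" is left unset, which would trip
# the all_dynamic_shape_set check.
print(missing)  # -> ['im_shape']
```

Adding a set_shape call for each name this helper reports would make all_dynamic_shape_set true.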

jiangming7301 commented 11 months ago

2023-10-11 15-20-24屏幕截图

jiangming7301 commented 11 months ago

Why no reply yet?

jiangjiajun commented 11 months ago

You probably need to follow the PaddleDetection docs and re-export the PPYOLOE model: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.6/configs/ppyoloe/README_cn.md#%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA

image
jiangming7301 commented 11 months ago

Exporting the model with trt=true still errors:

dlinano@jetson-nano:~/FastDeploy/examples/vision/detection/paddledetection/python$ python infer_ppyolo.py --model_dir model_dirtrt --image test_cai.jpg --device gpu --use_trt True

WARNING:root:RuntimeOption.set_trt_input_shape will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape() instead.
WARNING:root:RuntimeOption.set_trt_input_shape will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape() instead.
[INFO] fastdeploy/vision/common/processors/transform.cc(45)::FuseNormalizeCast Normalize and Cast are fused to Normalize in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW Normalize and HWC2CHW are fused to NormalizeAndPermute in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(29)::BuildOption Will inference_precision float32
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(66)::BuildOption Will try to use tensorrt inference with Paddle Backend.
[WARNING] fastdeploy/runtime/backends/paddle/paddle_backend.cc(78)::BuildOption Detect that tensorrt cache file has been set to ./tensorrt_cache/model.trt, but while enable paddle2trt, please notice that the cache file will save to the directory where paddle model saved.
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(511)::GetDynamicShapeFromOption image: the max shape = [1, 3, 640, 640], the min shape = [1, 3, 640, 640], the opt shape = [1, 3, 640, 640]
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(511)::GetDynamicShapeFromOption scale_factor: the max shape = [1, 2], the min shape = [1, 2], the opt shape = [1, 2]
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(474)::SetTRTDynamicShapeToConfig Start setting trt dynamic shape.
[INFO] fastdeploy/runtime/backends/paddle/paddle_backend.cc(476)::SetTRTDynamicShapeToConfig Finish setting trt dynamic shape.
Traceback (most recent call last):
  File "infer_ppyolo.py", line 67, in <module>
    model_file, params_file, config_file, runtime_option=runtime_option)
  File "/usr/local/lib/python3.6/dist-packages/fastdeploy/vision/detection/ppdet/__init__.py", line 187, in __init__
    model_format)
RuntimeError:


C++ Traceback (most recent call last):

0   paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
1   paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
2   std::unique_ptr<paddle::PaddlePredictor, std::default_delete > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
3   paddle::AnalysisPredictor::Init(std::shared_ptr const&, std::shared_ptr const&)
4   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr const&)
5   paddle::AnalysisPredictor::OptimizeInferenceProgram()
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument)
7   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument)
8   paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete >)
9   paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph) const
10  paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph) const
11  paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node, paddle::framework::ir::Graph, std::vector<std::string, std::allocator > const&, std::vector<std::string, std::allocator >) const
12  paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc, paddle::framework::Scope const&, std::vector<std::string, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<std::string, std::allocator > const&, paddle::inference::tensorrt::TensorRTEngine)
13  phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const, int)
14  phi::enforce::GetCurrentTraceBackStringabi:cxx11


Error Message Summary:

InvalidArgumentError: some trt inputs dynamic shape info not set, check the INFO log above for more details.
  [Hint: Expected all_dynamic_shape_set == true, but received all_dynamic_shape_set:0 != true:1.] (at /home/paddle/data/xly/workspace/23303/Paddle/paddle/fluid/inference/tensorrt/convert/op_converter.h:355)

Right now, without option.paddle_infer_option.enable_trt = True, running with option.use_paddle_backend() alone is very slow; and with option.trt_option.serialize_file = "./tensorrt_cache/model.trt" set, no model.trt file is ever generated.

jiangming7301 commented 11 months ago


jiangming7301 commented 11 months ago

Communicating like this is too slow and exhausting; it takes ages to get a reply.

jiangjiajun commented 11 months ago

I have other work during the day and don't watch GitHub notifications closely. You can add me on WeChat and we'll set up a time to talk: in any FastDeploy WeChat group, find my avatar and add me.

wf2000cn commented 10 months ago

@jiangjiajun So you're the person behind that set(PADDLEINFERENCE_VERSION "0.0.0") line — I'd been searching the web blindly for it! All these little details nearly broke me; I've burned more than a week on this: Paddle Inference, Paddle Lite, FastDeploy, OpenCV, C++, Python — build problems everywhere, solved one by one. Which WeChat group can I join? Every invite I find is expired; I can't find the community! I still have a bunch of questions and would appreciate your advice.

Because the build kept hitting that REGEX error, I didn't build with the prebuilt backend; after checkout I built directly (tried both develop and the release/1.0.0 tag). The build can run CPU inference, but TRT inference errors out and the examples no longer work. To verify, I built and ran both the C++ and Python versions, and they report the same error. The engine is created successfully, but inference fails and I can't tell what the error means:

[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(643)::BuildTrtEngine TensorRT Engine is built successfully.
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(239)::log 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(348)::Infer Failed to Infer with TensorRT.
[ERROR] fastdeploy/vision/detection/ppdet/base.cc(73)::BatchPredict Failed to inference by runtime.
DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]

wf2000cn commented 10 months ago

I was using the 2.5.2 prebuilt package and added set(PADDLEINFERENCE_VERSION "2.5.2"), but the build errors out; I don't know what I got wrong. The Python build downloaded the Python-flavored inference library, but the page says to download the C++ one? image After switching to the C++ library the build went through; I'll verify TRT inference again next. I then built the C++ version twice: the first run failed at the very end, and after a reboot it failed again. Error and system info below: image image

jiangjiajun commented 10 months ago

The PPYOLOE model is too large to run with TRT on the Jetson Nano. Try PicoDet instead.
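For reference, swapping in PicoDet with the same option plumbing would look roughly like the sketch below. The fd.vision.detection.PicoDet class exists in FastDeploy, but the file names, the 320x320 input size, and the idea that both image and scale_factor need shapes are assumptions — verify against your exported model and the FastDeploy examples.

```python
# Common PicoDet export input size (assumed; adjust to your export).
PICODET_SHAPE = [1, 3, 320, 320]

def run_picodet(model_dir, image_path,
                trt_cache="./tensorrt_cache/picodet.trt"):
    """Sketch: run a PicoDet detector with the TRT backend on a Jetson Nano.

    PicoDet's engine is small enough that the Nano's 4 GB of shared memory
    can usually hold it, unlike PP-YOLOE."""
    import os
    import cv2
    import fastdeploy as fd  # lazy import: this function is a sketch

    option = fd.RuntimeOption()
    option.use_gpu()
    option.use_trt_backend()
    option.trt_option.serialize_file = trt_cache  # reuse engine on later runs
    # Fixed shapes (min == opt == max) for every model input, so the
    # "dynamic shape info not set" error seen earlier cannot occur.
    option.trt_option.set_shape("image", PICODET_SHAPE,
                                PICODET_SHAPE, PICODET_SHAPE)
    option.trt_option.set_shape("scale_factor", [1, 2], [1, 2], [1, 2])

    model = fd.vision.detection.PicoDet(
        os.path.join(model_dir, "model.pdmodel"),
        os.path.join(model_dir, "model.pdiparams"),
        os.path.join(model_dir, "infer_cfg.yml"),
        runtime_option=option)
    return model.predict(cv2.imread(image_path))
```

Expect the first run to still be slow while the engine is built; subsequent runs should load the serialized engine from trt_cache.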

wf2000cn commented 10 months ago

@jiangjiajun Where can I find a tech group to join? Everything I searched for is either an official public account or an expired invite.