PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle: high-performance single-machine and distributed training, and cross-platform deployment, for deep learning and machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

Can a prebuilt paddlepaddle-gpu package for JetPack 5.0.2 be provided? #51093

Closed futureflsl closed 1 year ago

futureflsl commented 1 year ago

Issue Description

I see that the official site provides prebuilt packages up to JetPack 4.6.1 at most, but there is none for JetPack 5.0.2 + CUDA 11.4. Could one be provided? Building from source keeps failing on my end.

Version & Environment Information

Environment:
Software part of jetson-stats 4.1.5 - (c) 2023, Raffaello Bonghi
Model: NVIDIA Jetson Xavier NX Developer Kit - Jetpack 5.0.2 GA [L4T 35.1.0]
NV Power Mode: MODE_20W_6CORE - Type: 8

Error while building Paddle:

nvidia@nvidia-desktop:~/lu/Paddle/build$ cmake .. -DWITH_CONTRIB=OFF -DWITH_MKL=OFF -DWITH_MKLDNN=OFF -DWITH_TESTING=OFF -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON -DWITH_PYTHON=OFF -DWITH_XBYAK=OFF -DWITH_NV_JETSON=ON
CMake Deprecation Warning at CMakeLists.txt:25 (cmake_policy):
  The OLD behavior for policy CMP0026 will be removed from a future version of CMake.

The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD.

-- Found Paddle host system: ubuntu, version: 20.04.4
-- Found Paddle host system's CPU: 6 cores
-- The CXX compiler identification is GNU 9.4.0
-- The C compiler identification is GNU 9.4.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
for cuda before 11.7, libcudart.so must be used for the lazy module loading trick to work, instead of libcudart_static.a
-- The CUDA compiler identification is NVIDIA 11.4.239
-- Check for working CUDA compiler: /usr/local/cuda-11.4/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda-11.4/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- CUDA compiler: /home/nvidia/lu/Paddle/tools/nvcc_lazy, version: NVIDIA 11.4.239
-- CXX compiler: /usr/bin/c++, version: GNU 9.4.0
-- C compiler: /usr/bin/cc, version: GNU 9.4.0
-- AR tools: /usr/bin/ar
-- Found Git: /usr/bin/git (found version "2.25.1")
-- Performing Test MMX_FOUND
-- Performing Test MMX_FOUND - Failed
-- Performing Test SSE2_FOUND
-- Performing Test SSE2_FOUND - Failed
-- Performing Test SSE3_FOUND
-- Performing Test SSE3_FOUND - Failed
-- Performing Test AVX_FOUND
-- Performing Test AVX_FOUND - Failed
-- Performing Test AVX2_FOUND
-- Performing Test AVX2_FOUND - Failed
-- Performing Test AVX512F_FOUND
-- Performing Test AVX512F_FOUND - Failed
-- Current NCCL header is /usr/local/include/nccl.h. Current NCCL version is v2804.
-- CUDA detected: 11.4.239
-- WARNING: This is just a warning for publishing release. You are building GPU version without supporting different architectures. So the wheel package may fail on other GPU architectures. You can add -DCUDA_ARCH_NAME=All in cmake command to get a full wheel package to resolve this warning. While, this version will still work on local GPU architecture.
-- NVCC_FLAGS_EXTRA: -gencode arch=compute_72,code=sm_72
-- Current cuDNN header is /usr/include/cudnn_version.h Current cuDNN version is v8.4.1.
CMake Warning at CMakeLists.txt:484 (message):
  Disable RCCL when compiling without ROCM. Force WITH_RCCL=OFF.

-- warp-ctc library: /home/nvidia/lu/Paddle/build/third_party/install/warpctc/lib/libwarpctc.so
-- warp-rnnt library: /home/nvidia/lu/Paddle/build/third_party/install/warprnnt/lib/libwarprnnt.so
-- Build OpenBLAS by External Project (include: /home/nvidia/lu/Paddle/build/third_party/install/openblas/include, library: /home/nvidia/lu/Paddle/build/third_party/install/openblas/lib/libopenblas.a)
-- CBLAS_PROVIDER: EXTERN_OPENBLAS
-- Protobuf protoc executable: /home/nvidia/lu/Paddle/build/third_party/install/protobuf/bin/protoc
-- Protobuf-lite library: /home/nvidia/lu/Paddle/build/third_party/install/protobuf/lib/libprotobuf-lite.a
-- Protobuf library: /home/nvidia/lu/Paddle/build/third_party/install/protobuf/lib/libprotobuf.a
-- Protoc library: /home/nvidia/lu/Paddle/build/third_party/install/protobuf/lib/libprotoc.a
-- Protobuf version: 21.12
-- Download dependence[externalError] from https://paddlepaddledeps.bj.bcebos.com/externalErrorMsg_20210928.tar.gz, MD5: a712a49384e77ca216ad866712f7cafa
POCKETFFT_INCLUDE_DIR is /home/nvidia/lu/Paddle/build/third_party/pocketfft/src
-- Found PythonInterp: /usr/bin/python (found version "2.7.18")
CMake Warning at cmake/flags.cmake:12 (message):
  Found GCC 9.4.0 which is too high, recommended to use GCC 8.2
Call Stack (most recent call first):
  cmake/flags.cmake:36 (checkcompilercxx14flag)
  CMakeLists.txt:592 (include)

-- Looking for UINT64_MAX
-- Looking for UINT64_MAX - found
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of pthread_spinlock_t
-- Check size of pthread_spinlock_t - done
-- Check size of pthread_barrier_t
-- Check size of pthread_barrier_t - done
-- Performing Test C_COMPILER_SUPPORT_FLAG__fPIC
-- Performing Test C_COMPILER_SUPPORT_FLAG__fPIC - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__fPIC
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__fPIC - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__fno_omit_frame_pointer
-- Performing Test C_COMPILER_SUPPORT_FLAG__fno_omit_frame_pointer - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__fno_omit_frame_pointer
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__fno_omit_frame_pointer - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Werror
-- Performing Test C_COMPILER_SUPPORT_FLAG__Werror - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Werror
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Werror - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wall
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wall - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wall
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wall - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wextra
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wextra - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wextra
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wextra - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wnon_virtual_dtor
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wnon_virtual_dtor - Failed
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wnon_virtual_dtor
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wnon_virtual_dtor - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wdelete_non_virtual_dtor
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wdelete_non_virtual_dtor - Failed
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wdelete_non_virtual_dtor
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wdelete_non_virtual_dtor - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_unused_parameter
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_unused_parameter - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_unused_parameter
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_unused_parameter - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_unused_function
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_unused_function - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_unused_function
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_unused_function - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_literal_suffix
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_literal_suffix - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_literal_suffix
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_literal_suffix - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_array_bounds
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_array_bounds - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_array_bounds
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_array_bounds - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_ignored_attributes
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_ignored_attributes - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_ignored_attributes
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_ignored_attributes - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_terminate
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_terminate - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_terminate
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_terminate - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_int_in_bool_context
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_int_in_bool_context - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_int_in_bool_context
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_int_in_bool_context - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wimplicit_fallthrough_0
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wimplicit_fallthrough_0 - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wimplicit_fallthrough_0
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wimplicit_fallthrough_0 - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_maybe_uninitialized
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_maybe_uninitialized - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_maybe_uninitialized
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_error_maybe_uninitialized - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_ignored_qualifiers
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_ignored_qualifiers - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_ignored_qualifiers
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_ignored_qualifiers - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_ignored_attributes
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_ignored_attributes - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_ignored_attributes
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_ignored_attributes - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_parentheses
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_parentheses - Success
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_parentheses
-- Performing Test CXX_COMPILER_SUPPORT_FLAG__Wno_parentheses - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_unused_local_typedefs
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_unused_local_typedefs - Success
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_unused_function
-- Performing Test C_COMPILER_SUPPORT_FLAG__Wno_error_unused_function - Success
-- Paddle version is 0.0.0
-- Found CUDA: /usr/local/cuda-11.4 (found version "11.4")
-- On inference mode, will take place some specific optimization.
create or copy auto-geneated tensor files
/usr/bin/python: No module named pip
  File "/home/nvidia/lu/Paddle/paddle/phi/api/yaml/generator/tensor_operants_gen.py", line 461
    """
    ^
SyntaxError: invalid syntax
CMake Error at paddle/phi/api/lib/CMakeLists.txt:257 (message):
  tensor codegen failed, exiting.

-- Configuring incomplete, errors occurred!
See also "/home/nvidia/lu/Paddle/build/CMakeFiles/CMakeOutput.log".
See also "/home/nvidia/lu/Paddle/build/CMakeFiles/CMakeError.log".
nvidia@nvidia-desktop:~/lu/Paddle/build$ jetson_release
Software part of jetson-stats 4.1.5 - (c) 2023, Raffaello Bonghi
Model: NVIDIA Jetson Xavier NX Developer Kit - Jetpack 5.0.2 GA [L4T 35.1.0]
NV Power Mode: MODE_20W_6CORE - Type: 8

paddle-bot[bot] commented 1 year ago

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please check again that you have provided a clear problem description, reproduction code, environment & version details, and error messages. You may also look through the API documentation, FAQ, GitHub issue history, and the AI community to find an answer. Have a nice day!

engineer1109 commented 1 year ago

pip isn't installed. Fix that yourself.
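The configure log above shows CMake picking up /usr/bin/python (Python 2.7.18, with no pip), so the Python 3-only codegen script fails to parse. A minimal sketch of one possible fix, assuming an apt-based JetPack rootfs; the -DPYTHON_EXECUTABLE hint is a suggestion, not a confirmed recipe:

```shell
# Install pip for Python 3 so the codegen's dependencies can be installed.
sudo apt-get update
sudo apt-get install -y python3-pip

# Re-run CMake, explicitly pointing it at a Python 3 interpreter so that
# tensor_operants_gen.py no longer runs under Python 2.
cmake .. -DWITH_CONTRIB=OFF -DWITH_MKL=OFF -DWITH_MKLDNN=OFF \
  -DWITH_TESTING=OFF -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON \
  -DWITH_PYTHON=OFF -DWITH_XBYAK=OFF -DWITH_NV_JETSON=ON \
  -DPYTHON_EXECUTABLE=$(which python3)
```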

futureflsl commented 1 year ago

pip isn't installed. Fix that yourself.

That problem is solved. But without a proxy I simply can't download the third-party libraries; I've been compiling for a whole day and still haven't succeeded.
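For the third-party download failures, one commonly used workaround is to route the build's network traffic through a proxy. A sketch, assuming an HTTP proxy is already reachable at the illustrative address below:

```shell
# Export proxy variables for the current shell so that downloads made by
# CMake's ExternalProject step go through the proxy.
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890

# git clones performed by the third-party fetch step need their own setting.
git config --global http.proxy http://127.0.0.1:7890
git config --global https.proxy http://127.0.0.1:7890
```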

ZhangHandi commented 1 year ago

Hi, Paddle 2.4.2 will ship a prebuilt package for JetPack 5.0.2 + CUDA 11.4; please wait for that release.
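Once such a wheel is published, installing and smoke-testing it would look roughly like this (the wheel file name below is illustrative, not the real artifact name):

```shell
# Install the downloaded Jetson wheel with Python 3's pip.
python3 -m pip install paddlepaddle_gpu-2.4.2-cp38-cp38-linux_aarch64.whl

# Verify the installation; paddle.utils.run_check() prints a summary and
# raises an error if the CUDA runtime is not usable.
python3 -c "import paddle; paddle.utils.run_check()"
```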

lw-2017 commented 1 year ago

pip isn't installed. Fix that yourself.

That problem is solved. But without a proxy I simply can't download the third-party libraries; I've been compiling for a whole day and still haven't succeeded.

Did you manage to build with JetPack 5.0.2 + CUDA 11.4 in the end? I also keep hitting errors building it myself. If your build succeeded, could you share it? Thanks!

sck-star commented 1 year ago

Did the build succeed?