microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Build] Error in building with TensorRT EP #14394

Open leilakhalili87 opened 1 year ago

leilakhalili87 commented 1 year ago

Describe the issue

I am getting an error when building onnxruntime with the TensorRT EP.

cmake version: 3.25.2
CUDA: 11.6
cuDNN: 8.2.4
onnxruntime: 1.13
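For reference, a minimal sketch of how these versions can be double-checked on the build machine; the cuDNN header path is an assumption based on a standard install layout:

cmake --version                                                      # expect 3.25.2
nvcc --version                                                       # expect CUDA 11.6
gcc --version                                                        # expect 8.3.0
grep -A 2 'define CUDNN_MAJOR' $cuDNN_PATH/include/cudnn_version.h   # cuDNN 8.x version macros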

Urgency

It is very urgent.

Target platform

CentOS 7

Build script

./build.sh --cudnn_home $cuDNN_PATH --cuda_home $CUDA_PATH --use_tensorrt --tensorrt_home $tensorrt_home
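For completeness, a sketch of how the command above is invoked; the exported paths are placeholders, not the actual locations on this machine:

# Placeholder install locations (assumptions); substitute the real paths
export CUDA_PATH=/usr/local/cuda-11.6
export cuDNN_PATH=/usr/local/cudnn-8.2.4
export tensorrt_home=/opt/TensorRT
./build.sh --cudnn_home $cuDNN_PATH --cuda_home $CUDA_PATH --use_tensorrt --tensorrt_home $tensorrt_home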

Error / output

[ 50%] Building CXX object CMakeFiles/onnxruntime_providers_cuda.dir/home/leila-khalili/onnxruntime/onnxruntime/core/providers/cuda/test/cuda_execution_provider_test.cc.o
[ 50%] Building CXX object CMakeFiles/onnxruntime_providers_cuda.dir/home/leila-khalili/onnxruntime/onnxruntime/core/providers/shared_library/provider_bridge_provider.cc.o
[ 50%] Building CUDA object CMakeFiles/onnxruntime_providers_cuda.dir/home/leila-khalili/onnxruntime/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o
nvcc warning : The 'compute_35', 'compute_37', 'sm_35', and 'sm_37' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
/home/leila-khalili/onnxruntime/build/Linux/Debug/_deps/abseil_cpp-src/absl/container/internal/inlined_vector.h: In member function ‘void absl::lts_20211102::inlined_vector_internal::Storage<T, N, A>::Swap(absl::lts_20211102::inlined_vector_internal::Storage<T, N, A>*)’:
/home/leila-khalili/onnxruntime/build/Linux/Debug/_deps/abseil_cpp-src/absl/container/internal/inlined_vector.h:907:97: error: expected ‘;’ before ‘}’ token
    allocated_ptr->SetAllocation(
                                 ^
                                 ;
/home/leila-khalili/onnxruntime/build/Linux/Debug/_deps/abseil_cpp-src/absl/container/internal/inlined_vector.h:915:95: error: expected ‘;’ before ‘}’ token
    inlined_ptr->SetAllocation(
                               ^
                               ;
gmake[2]: *** [CMakeFiles/onnxruntime_providers_cuda.dir/home/leila-khalili/onnxruntime/onnxruntime/core/providers/cuda/activation/activations_impl.cu.o] Error 1
gmake[1]: *** [CMakeFiles/onnxruntime_providers_cuda.dir/all] Error 2
gmake: *** [all] Error 2
Traceback (most recent call last):
  File "/home/leila-khalili/onnxruntime/tools/ci_build/build.py", line 2812, in <module>
    sys.exit(main())
  File "/home/leila-khalili/onnxruntime/tools/ci_build/build.py", line 2727, in main
    build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
  File "/home/leila-khalili/onnxruntime/tools/ci_build/build.py", line 1349, in build_targets
    run_subprocess(cmd_args, env=env)
  File "/home/leila-khalili/onnxruntime/tools/ci_build/build.py", line 740, in run_subprocess
    return run(args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
  File "/home/leila-khalili/onnxruntime/tools/python/util/run.py", line 49, in run
    completed_process = subprocess.run(
  File "/usr/local/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/leila-khalili/cmake-3.25.2-linux-x86_64/bin/cmake', '--build', '/home/leila-khalili/onnxruntime/build/Linux/Debug', '--config', 'Debug']' returned non-zero exit status 2.

Visual Studio Version

No response

GCC / Compiler Version

8.3.0

jywu-msft commented 1 year ago

The build error doesn't seem related to the TensorRT EP. Can you confirm which branch you are building? Is it rel-1.13.1? And do you get the same error if you do a fresh build with a vanilla CPU build (./build.sh), or with only the CUDA EP?

./build.sh --cudnn_home $cuDNN_PATH --cuda_home $CUDA_PATH --use_cuda
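One way to make those comparison builds fresh (a sketch only; the build directory path is taken from the log above, and wiping it is just one way to force a clean rebuild):

# Wipe the previous Debug build tree before each attempt
rm -rf build/Linux/Debug
./build.sh                                                              # vanilla CPU build
rm -rf build/Linux/Debug
./build.sh --cudnn_home $cuDNN_PATH --cuda_home $CUDA_PATH --use_cuda   # CUDA EP only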

leilakhalili87 commented 1 year ago

I am using this command:

git clone --recursive --branch v1.13.1 https://github.com/microsoft/onnxruntime.git
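In case it helps, a quick sketch of how the checkout can be verified, run from inside the cloned repository:

cd onnxruntime
git describe --tags      # should report v1.13.1
git submodule status     # confirms the submodules came down with --recursive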

I will try to build for CPU and will update you. Thanks