Closed: xkszltl closed this issue 3 years ago.
Thanks for reporting. We'll have to investigate why this pure CMake build fails versus using build.bat (which just invokes CMake internally).
According to the error message:

D:\roaster-scratch\onnxruntime\cmake\external\onnx-tensorrt\NvOnnxParser.h(26,10): fatal error C1083: Cannot open include file: 'NvInfer.h'

it seems TensorRT is not found. Could you check your TensorRT installation path?
TRT is installed. Scroll down and you'll also see:

fatal error C1083: Cannot open include file: 'mlas.h'

So most likely the -I include paths passed to the compiler are broken, not the TensorRT installation itself.
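A quick way to sanity-check the diagnosis above is to verify which of the expected headers are actually reachable from a given set of include directories. The sketch below is a hypothetical helper (not part of onnxruntime), and the TensorRT path in the example is a placeholder:

```python
import os

def find_missing_headers(include_dirs, headers):
    """Return the headers not found in any of the given include directories."""
    missing = []
    for header in headers:
        if not any(os.path.isfile(os.path.join(d, header)) for d in include_dirs):
            missing.append(header)
    return missing

# Placeholder path; substitute the -I directories from your failing compile line.
dirs = [r"C:\tensorrt\TensorRT-7.1.3.4\include"]
print(find_missing_headers(dirs, ["NvInfer.h", "mlas.h"]))
```

If a header the build needs shows up as missing even though the package is installed, the corresponding include directory is not being passed to the compiler.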
With the latest master, I was able to build successfully on Windows using purely CMake. Can you please try again with the latest master? In the last 5 days there were a couple of PRs that could have affected this, specifically https://github.com/microsoft/onnxruntime/pull/5167 and https://github.com/microsoft/onnxruntime/pull/5218.
I used the batch file below, which has all the CMake build options you referenced above.
```bat
mkdir build\Windows\Release && cd build\Windows\Release
cmake ^
  -A x64 ^
  -DBOOST_ROOT="${Env:ProgramFiles}/boost" ^
  -DBUILD_SHARED_LIBS=OFF ^
  -DCMAKE_C_FLAGS="/GL /MP /Zi /arch:AVX2" ^
  -DCMAKE_CUDA_FLAGS="-gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_37,code=sm_37" ^
  -DCMAKE_CXX_FLAGS="/EHsc /GL /MP /Zi /arch:AVX2" ^
  -DCMAKE_EXE_LINKER_FLAGS="/DEBUG:FASTLINK /LTCG:incremental" ^
  -DCMAKE_INSTALL_PREFIX="${Env:ProgramFiles}/onnxruntime" ^
  -DCMAKE_PDB_OUTPUT_DIRECTORY="${PWD}/pdb" ^
  -DCMAKE_SHARED_LINKER_FLAGS="/DEBUG:FASTLINK /LTCG:incremental" ^
  -DCMAKE_STATIC_LINKER_FLAGS="/LTCG:incremental" ^
  -DCUDA_VERBOSE_BUILD=ON ^
  -Deigen_SOURCE_PATH="${Env:ProgramFiles}/Eigen3/include/eigen3" ^
  -Donnxruntime_BUILD_CSHARP=OFF ^
  -Donnxruntime_BUILD_SHARED_LIB=ON ^
  -Donnxruntime_CUDA_HOME="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0" ^
  -Donnxruntime_CUDNN_HOME="C:\cudnn8\cuda" ^
  -Donnxruntime_ENABLE_PYTHON=ON ^
  -Donnxruntime_RUN_ONNX_TESTS=ON ^
  -Donnxruntime_ENABLE_LANGUAGE_INTEROP_OPS=ON ^
  -Donnxruntime_ENABLE_LTO=OFF ^
  -Donnxruntime_PREFER_SYSTEM_LIB=OFF ^
  -Donnxruntime_TENSORRT_HOME="C:\tensorrt\TensorRT-7.1.3.4" ^
  -Donnxruntime_USE_CUDA=ON ^
  -Donnxruntime_USE_DNNL=ON ^
  -Donnxruntime_USE_EIGEN_FOR_BLAS=ON ^
  -Donnxruntime_USE_FULL_PROTOBUF=ON ^
  -Donnxruntime_USE_JEMALLOC=OFF ^
  -Donnxruntime_USE_LLVM=OFF ^
  -Donnxruntime_USE_MKLML=OFF ^
  -Donnxruntime_USE_NGRAPH=OFF ^
  -Donnxruntime_USE_NUPHAR=OFF ^
  -Donnxruntime_USE_OPENBLAS=OFF ^
  -Donnxruntime_USE_OPENMP=OFF ^
  -Donnxruntime_USE_PREINSTALLED_EIGEN=OFF ^
  -Donnxruntime_USE_TENSORRT=ON ^
  -Donnxruntime_USE_TVM=OFF ^
  -G"Visual Studio 16 2019" ^
  -T"host=x64" ^
  ..\..\..\cmake
cmake --build . --config Release -- -maxcpucount
```
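One way to confirm the configure step actually recorded options like onnxruntime_TENSORRT_HOME is to read them back from CMakeCache.txt in the build directory, where CMake stores each cached option as a NAME:TYPE=VALUE line. A minimal sketch (hypothetical helper, not part of onnxruntime):

```python
def read_cmake_cache(path, keys):
    """Parse NAME:TYPE=VALUE entries from a CMakeCache.txt for the requested keys."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and CMake cache comments.
            if not line or line.startswith(("#", "//")):
                continue
            if "=" in line and ":" in line.split("=", 1)[0]:
                name_type, value = line.split("=", 1)
                name = name_type.split(":", 1)[0]
                if name in keys:
                    values[name] = value
    return values

# Hypothetical usage from inside build\Windows\Release after configuring:
# print(read_cmake_cache("CMakeCache.txt",
#       {"onnxruntime_TENSORRT_HOME", "onnxruntime_USE_TENSORRT"}))
```

If the cached value is empty or wrong, the problem is in the configure invocation rather than in the compile step.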
Great, now it works! Thanks for the help ^_^
Describe the bug
We got a lot of header-not-found errors when building with TensorRT on Windows. It works fine with just CUDA.
Here are the CMake args for a successful build: https://github.com/xkszltl/Roaster/blob/ca52bfccd4c49b2bcbe9526e2f0473ea810a5f48/win/pkgs/ort.ps1#L150-L190

It'll fail if I set -Donnxruntime_USE_TENSORRT=ON.

System information