Running tests...
Test project /neo-ai-dlr/build
Start 1: dlr_allocator_test
1/12 Test #1: dlr_allocator_test ............... Passed 0.74 sec
Start 2: dlr_common_test
2/12 Test #2: dlr_common_test .................. Passed 0.00 sec
Start 3: dlr_elem_test
3/12 Test #3: dlr_elem_test .................... Passed 5.52 sec
Start 4: dlr_pipeline_test
4/12 Test #4: dlr_pipeline_test ................ Passed 0.01 sec
Start 5: dlr_relayvm_elem_test
5/12 Test #5: dlr_relayvm_elem_test ............ Passed 46.65 sec
Start 6: dlr_relayvm_test
6/12 Test #6: dlr_relayvm_test ................. Passed 43.65 sec
Start 7: dlr_test
7/12 Test #7: dlr_test ......................... Passed 14.83 sec
Start 8: dlr_treelite_test
8/12 Test #8: dlr_treelite_test ................ Passed 0.01 sec
Start 9: dlr_tvm_elem_test
9/12 Test #9: dlr_tvm_elem_test ................ Passed 2.81 sec
Start 10: dlr_tvm_test
10/12 Test #10: dlr_tvm_test ..................... Passed 3.15 sec
Start 11: dlr_dlsym_test
11/12 Test #11: dlr_dlsym_test ................... Passed 4.79 sec
Start 12: dlr_multiple_lib_test
12/12 Test #12: dlr_multiple_lib_test ............ Passed 1.16 sec
100% tests passed, 0 tests failed out of 12
Total Test time (real) = 123.31 sec
GPU
cmake .. -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_TENSORRT=ON -DTENSORRT_INCLUDE_DIR=/tmp/TensorRT-8.4.3.1/include -DTENSORRT_LIB_DIR=/tmp/TensorRT-8.4.3.1/lib/libnvinfer.so && make -j && make test
Output:
Running tests...
Test project /neo-ai-dlr/build
Start 1: dlr_allocator_test
1/12 Test #1: dlr_allocator_test ............... Passed 0.78 sec
Start 2: dlr_common_test
2/12 Test #2: dlr_common_test .................. Passed 0.03 sec
Start 3: dlr_elem_test
3/12 Test #3: dlr_elem_test .................... Passed 5.54 sec
Start 4: dlr_pipeline_test
4/12 Test #4: dlr_pipeline_test ................ Passed 0.03 sec
Start 5: dlr_relayvm_elem_test
5/12 Test #5: dlr_relayvm_elem_test ............ Passed 46.51 sec
Start 6: dlr_relayvm_test
6/12 Test #6: dlr_relayvm_test ................. Passed 43.70 sec
Start 7: dlr_test
7/12 Test #7: dlr_test ......................... Passed 14.88 sec
Start 8: dlr_treelite_test
8/12 Test #8: dlr_treelite_test ................ Passed 0.04 sec
Start 9: dlr_tvm_elem_test
9/12 Test #9: dlr_tvm_elem_test ................ Passed 2.87 sec
Start 10: dlr_tvm_test
10/12 Test #10: dlr_tvm_test ..................... Passed 3.20 sec
Start 11: dlr_dlsym_test
11/12 Test #11: dlr_dlsym_test ................... Passed 3.83 sec
Start 12: dlr_multiple_lib_test
12/12 Test #12: dlr_multiple_lib_test ............ Passed 1.23 sec
100% tests passed, 0 tests failed out of 12
Total Test time (real) = 122.64 sec
The commit https://github.com/apache/tvm/pull/12692 has caused the DLR build for GPU to fail.
Error:
Solution:
bool_constant is part of the C++17 standard, while gcc and nvcc are set to C++14. Two changes are required:
1. Change the C++ standard from 14 to 17 at https://github.com/neo-ai/neo-ai-dlr/blob/release-1.13.0/CMakeLists.txt#L183 (I can create a commit and you can cherry-pick and merge it into the release branch).
2. Use CMake 3.18 or newer, since C++17 for CUDA is only supported from CMake 3.18 onwards.
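For illustration, a minimal sketch of what the two changes could look like in the top-level CMakeLists.txt, assuming the standard is controlled through the usual CMAKE_CXX_STANDARD / CMAKE_CUDA_STANDARD variables (the actual lines around L183 of release-1.13.0 may use a different mechanism, such as explicit -std compiler flags):

cmake_minimum_required(VERSION 3.18)   # change 2: CUDA C++17 support requires CMake >= 3.18

# change 1: compile host and device code as C++17 so that std::bool_constant,
# pulled in by TVM after apache/tvm#12692, is available
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CUDA_STANDARD 17)
set(CMAKE_CUDA_STANDARD_REQUIRED ON)

Whether one or both variables need to change depends on how the existing build passes the standard to gcc and nvcc.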
Testing
CPU: cmake .. && make -j && make test
GPU: cmake .. -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_TENSORRT=ON -DTENSORRT_INCLUDE_DIR=/tmp/TensorRT-8.4.3.1/include -DTENSORRT_LIB_DIR=/tmp/TensorRT-8.4.3.1/lib/libnvinfer.so && make -j && make test
Output: both configurations pass all 12 tests; the full ctest logs are shown above.