NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

test_cpp.py #2354

weizhi-wang commented 1 month ago

Running cpp/tests/resources/scripts/test_cpp.py fails with:

subprocess.CalledProcessError: Command '['cmake', '--build', '/media/data/wwz/TensorRT-LLM/cpp/build', '--config', 'Release', '-j', '--target', 'modelSpec']' returned non-zero exit status 2.
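For reference, an exception of this shape is what subprocess.run(..., check=True) raises when the wrapped cmake invocation exits non-zero, so the real cause is in the cmake/make output printed before the Python traceback. A minimal sketch of that failure mode, reusing the command from the error (the actual invocation lives in cpp/tests/resources/scripts/test_cpp.py and may differ):

```python
# Minimal sketch of how a build helper like test_cpp.py surfaces this error.
# The command below is copied verbatim from the report; the build directory
# is specific to that machine and should be adjusted to your checkout.
import subprocess

build_cmd = [
    "cmake", "--build", "/media/data/wwz/TensorRT-LLM/cpp/build",
    "--config", "Release", "-j", "--target", "modelSpec",
]

# check=True converts a non-zero cmake exit status into
# subprocess.CalledProcessError; exit status 2 only says the underlying
# build failed, so the root cause is in cmake's/make's own output.
subprocess.run(build_cmd, check=True)
```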

The script's hint says: "If there is an issue finding model_spec.so in engine building, manually build model_spec.so by" running

      make -C cpp/build/ modelSpec

but that fails with:

      make: Entering directory '/media/data/wwz/TensorRT-LLM/cpp/build'
      make: *** No rule to make target 'modelSpec'. Stop.
      make: Leaving directory '/media/data/wwz/TensorRT-LLM/cpp/build'
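A quick way to confirm whether the 'modelSpec' target exists in the generated build system at all is to list the generated targets. This is a sketch assuming a Makefile-based build (which the make output above indicates), since CMake's Makefile generator emits a built-in 'help' target that prints every buildable target:

```python
# Sketch: list the targets CMake actually generated and look for modelSpec.
# Assumes the Makefile generator (as in the make output above); adjust
# build_dir to your checkout.
import subprocess

build_dir = "/media/data/wwz/TensorRT-LLM/cpp/build"

result = subprocess.run(
    ["cmake", "--build", build_dir, "--target", "help"],
    capture_output=True, text=True, check=True,
)

if "modelSpec" in result.stdout:
    print("modelSpec target is generated; re-run the failing build step.")
else:
    print("modelSpec target is missing; the configure step did not "
          "generate the C++ test resources.")
```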

Superjomn commented 1 month ago

Thanks for sharing the issue. If you are trying to install from source code, please follow the build-from-source-linux instructions. If not, please provide more information by following the issue template.

weizhi-wang commented 1 month ago

@Superjomn I built TensorRT-LLM from source exactly according to the build-from-source-linux instructions, but there is no 'modelSpec' target in the generated cpp/build/Makefile.
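One way to see why the target is not generated is to check what the configure step cached: CMake records every option in CMakeCache.txt inside the build directory, and the C++ test targets are typically guarded by a test-related option (which option actually guards modelSpec is an assumption here). A minimal sketch:

```python
# Sketch: print the test-related options the configure step cached.
# CMakeCache.txt always exists in a configured build directory; since the
# exact option guarding the modelSpec target is an assumption, this just
# surfaces every non-comment cache entry mentioning TEST for inspection.
from pathlib import Path

cache = Path("/media/data/wwz/TensorRT-LLM/cpp/build/CMakeCache.txt")

for line in cache.read_text().splitlines():
    if "TEST" in line.upper() and not line.startswith(("//", "#")):
        print(line)
```

If the relevant option turns out to be OFF, re-running the configure step with it enabled and then repeating the cmake --build command above should make the target appear.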

weizhi-wang commented 1 month ago

@Superjomn Thank you. In addition, everything else works normally; only the modelSpec build does not work.

github-actions[bot] commented 1 day ago

This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.