NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

how to run with 2 gpus #108

Closed: UncleFB closed this issue 11 months ago

UncleFB commented 11 months ago

I tried to build LLaMA 7B using 2-way tensor parallelism, but when I execute run.py I get this error: `AssertionError: Engine world size (2) != Runtime world size (1)`

byshiue commented 11 months ago

You should use

mpirun -n 2 python run.py
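The mismatch arises because the engine was built for 2 ranks but `python run.py` alone launches only one MPI rank; `mpirun -n 2` starts the two processes the engine expects. A minimal illustrative sketch of that consistency check (the function and values here are hypothetical, not TensorRT-LLM's actual internals):

```python
# Hypothetical sketch: an engine built with tensor parallelism records the
# world size it was built for, and the runtime compares it against the
# number of processes actually launched (one per `python run.py` without mpirun).
def check_world_size(engine_world_size: int, runtime_world_size: int) -> None:
    assert engine_world_size == runtime_world_size, (
        f"Engine world size ({engine_world_size}) != "
        f"Runtime world size ({runtime_world_size})"
    )

# Launched without mpirun, there is a single rank, so the check fails:
try:
    check_world_size(2, 1)
except AssertionError as e:
    print(e)
```

Under `mpirun -n 2`, each of the two ranks sees a runtime world size of 2, the check passes, and inference proceeds.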
byshiue commented 11 months ago

Closing this issue. Feel free to reopen if needed.