NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

fix load_model_on_cpu on qwen/convert_checkpoint.py #2382

lkm2835 commented 4 weeks ago

The load_model_on_cpu option is not propagated while converting Qwen checkpoints.
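
A minimal sketch of the kind of fix being requested, assuming the converter loads the Hugging Face checkpoint through transformers.AutoModelForCausalLM; the actual helper names and signatures in qwen/convert_checkpoint.py may differ. The idea is to forward the parsed flag into the model load so the weights stay in host memory instead of being sharded onto GPUs:

```python
# Hypothetical sketch, not the actual qwen/convert_checkpoint.py code:
# thread the --load_model_on_cpu flag through to the HF model load.
import argparse

import torch
from transformers import AutoModelForCausalLM


def load_hf_qwen(model_dir: str, load_model_on_cpu: bool = False):
    """Load the HF Qwen checkpoint, honoring the CPU-only flag."""
    return AutoModelForCausalLM.from_pretrained(
        model_dir,
        # Keep every weight on the CPU when requested; otherwise let
        # accelerate place the model across available GPUs.
        device_map="cpu" if load_model_on_cpu else "auto",
        torch_dtype=torch.float16,
        trust_remote_code=True,
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_dir", required=True)
    parser.add_argument("--load_model_on_cpu", action="store_true")
    args = parser.parse_args()
    # The reported bug is that the flag is parsed but never forwarded
    # to the load call, so conversion still tries to use GPU memory.
    model = load_hf_qwen(args.model_dir, args.load_model_on_cpu)
```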