NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

internlm2-chat-20b model convert_checkpoint.py does not have “--int8_kv_cache” option #1817

Open 256256mjw opened 3 months ago

256256mjw commented 3 months ago

Why is there no --int8_kv_cache option in convert_checkpoint.py when I want to build an int8 KV cache version of the internlm2-chat-20b model? The script is at /TensorRT-LLM/examples/internlm2/convert_checkpoint.py.
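
For reference, the invocation that fails looks roughly like this (model and output paths are illustrative). Because the script parses its options with argparse, the unknown flag aborts with an "unrecognized arguments" error:

    python examples/internlm2/convert_checkpoint.py \
        --model_dir ./internlm2-chat-20b \
        --output_dir ./tllm_checkpoint \
        --dtype float16 \
        --int8_kv_cache
    convert_checkpoint.py: error: unrecognized arguments: --int8_kv_cache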

nv-guomingz commented 3 months ago

internlm2 support was added in https://github.com/NVIDIA/TensorRT-LLM/pull/1392. Unlike the internlm example, that original implementation did not enable the int8 KV cache feature.
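
For anyone who wants to experiment before the feature is ported, here is a minimal sketch of how other example converters (for instance the llama one) expose the option. The function name build_quant_config below is illustrative, and a real internlm2 port would also need the calibration pass that computes the KV scaling factors, which this sketch omits:

    import argparse

    from tensorrt_llm.models.modeling_utils import QuantConfig
    from tensorrt_llm.quantization import QuantAlgo


    def parse_arguments():
        parser = argparse.ArgumentParser()
        # ... the existing internlm2 options would stay here ...
        parser.add_argument(
            '--int8_kv_cache',
            default=False,
            action='store_true',
            help='Store the KV cache in INT8 (requires calibration scales).')
        return parser.parse_args()


    def build_quant_config(args):
        # Translate the CLI flag into the quantization config that is
        # written into the converted TensorRT-LLM checkpoint.
        quant_config = QuantConfig()
        if args.int8_kv_cache:
            quant_config.kv_cache_quant_algo = QuantAlgo.INT8
        return quant_config

Note that without a calibration step the checkpoint would lack the KV scaling factors, so adding the flag alone is not enough; that calibration path is the part the original internlm2 implementation never wired up.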

github-actions[bot] commented 2 months ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 15 days.