NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

KV cache quantization leads to serious model accuracy loss? #2309

Open liguodongiot opened 1 month ago

liguodongiot commented 1 month ago

Hi, I found that TensorRT-LLM's KV cache quantization leads to serious model accuracy loss, while vLLM and LMDeploy show only minor loss.


Tracin commented 1 month ago

Per-tensor quantization on the KV cache can cause large error. Do you have CMMLU results without the KV cache quantized?
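The point about per-tensor scaling can be illustrated with a small NumPy sketch (a hypothetical simulation, not TensorRT-LLM code): when one channel has large outliers, a single per-tensor scale is dominated by that channel, so the remaining small-magnitude channels lose most of their precision. Per-channel scales avoid this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated KV-cache activations: most channels are small, but one channel
# has large outliers (a common pattern in LLM activations).
kv = rng.normal(0.0, 0.02, size=(128, 64))
kv[:, 0] *= 100.0  # outlier channel dominates the per-tensor max

def quantize_int8(x, scale):
    """Symmetric INT8 quantize then dequantize with the given scale."""
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale

# Per-tensor: one scale for the whole tensor, set by the global abs-max.
scale_tensor = np.abs(kv).max() / 127.0
err_tensor = np.abs(kv - quantize_int8(kv, scale_tensor)).mean()

# Per-channel: one scale per channel, so small channels keep precision.
scale_channel = np.abs(kv).max(axis=0, keepdims=True) / 127.0
err_channel = np.abs(kv - quantize_int8(kv, scale_channel)).mean()

print(f"per-tensor mean abs error:  {err_tensor:.6f}")
print(f"per-channel mean abs error: {err_channel:.6f}")
```

With the outlier channel present, the per-tensor scale is roughly 100x larger than most channels need, so its mean error is far higher than the per-channel variant's.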

github-actions[bot] commented 2 weeks ago

This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.