NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0

Is there a plan for FP8 support for MPT models? It seems that FP8 KV cache is already available. #1291

Open moonlightian opened 5 months ago

moonlightian commented 5 months ago

System Info

None

Who can help?

@Tracin https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/mpt

Information

Tasks

Reproduction

None

Expected behavior

None

Actual behavior

None

Additional notes

None

Tracin commented 5 months ago

Hi, I think we already support that: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/mpt#16-fp8-post-training-quantization-with-ammo
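The linked README section covers FP8 post-training quantization with AMMO. As a rough sketch of that flow (the model paths are placeholders and the exact flag names may differ by TensorRT-LLM version, so treat this as an assumption and defer to the README):

```shell
# Illustrative sketch only: quantize an MPT checkpoint to FP8, including the
# FP8 KV cache, then build a TensorRT engine from the quantized checkpoint.
# Paths and flags are assumptions; see examples/mpt in the repo for the
# exact invocation for your version.
python ../quantization/quantize.py \
    --model_dir ./mpt-7b \
    --dtype float16 \
    --qformat fp8 \
    --kv_cache_dtype fp8 \
    --output_dir ./mpt-7b-fp8

trtllm-build \
    --checkpoint_dir ./mpt-7b-fp8 \
    --output_dir ./engines/mpt-7b-fp8
```

The key point for the original question is that `--qformat fp8` quantizes the weights/activations while `--kv_cache_dtype fp8` covers the KV cache, so both are handled in one quantization pass.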

github-actions[bot] commented 3 months ago

This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.