System Info
GPU - A10
Who can help?
@Tracin
Information
Tasks
An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
Hi there,
With `--use_weight_only --weight_only_precision int8 --qformat fp8`, will the quantization be INT8 or FP8? The flags appear to request weight-only INT8 and FP8 at the same time.
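For reference, here is a minimal sketch of how the converted checkpoint's weight dtypes can be inspected to see which quantization actually took effect. It assumes the checkpoint was written as a safetensors file (the `output_dir/rank0.safetensors` path below is hypothetical; adjust to the actual output location):

```python
# Minimal sketch: report the dtype of each tensor in a converted
# checkpoint, to see whether INT8 or FP8 weights were produced.
# Assumption: the checkpoint is a safetensors file at a hypothetical
# path "output_dir/rank0.safetensors".
from collections import Counter

from safetensors import safe_open

CKPT = "output_dir/rank0.safetensors"  # hypothetical path

dtype_counts = Counter()
with safe_open(CKPT, framework="pt") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        dtype_counts[str(t.dtype)] += 1
        # Print weight tensors explicitly for spot checking.
        if name.endswith("weight"):
            print(f"{name}: {t.dtype}")

print(dtype_counts)  # torch.int8 dominating indicates weight-only INT8 applied
```

A check along these lines is what produces the torch.int8 observation reported under "Actual behavior" below.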
Expected behavior
The weights should have an FP8 dtype when printed.
Actual behavior
The printed weights are in torch.int8, not FP8.
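For reading the printout, these are the reference dtypes for the two modes in question; `torch.float8_e4m3fn` is the FP8 format typically used for weights. This is only a dtype reference, not a statement about TensorRT-LLM's checkpoint layout:

```python
import torch

# Reference dtypes for the two quantization modes in question.
# (torch.float8_e4m3fn requires PyTorch >= 2.1.)
int8_weight = torch.zeros(2, 2, dtype=torch.int8)           # weight-only INT8
fp8_weight = torch.zeros(2, 2, dtype=torch.float8_e4m3fn)   # FP8 (--qformat fp8)

print(int8_weight.dtype)  # torch.int8 -> what is actually observed
print(fp8_weight.dtype)   # torch.float8_e4m3fn -> what was expected
```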
Additional notes
N/A