NVIDIA / TensorRT-Model-Optimizer

TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques, including quantization, pruning, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed on NVIDIA GPUs.
https://nvidia.github.io/TensorRT-Model-Optimizer

[Feature Request] Support for encoder-decoder models #21

Open ashwin-js opened 5 months ago

ashwin-js commented 5 months ago

I am working with madlad400, which is an encoder-decoder model based on the T5 architecture. I am able to load it in TensorRT-LLM in bfloat16. I was wondering if it is possible to get int4 support for the same.
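For context, once encoder-decoder support lands, quantization would presumably follow the same `mtq.quantize` flow Model Optimizer documents for decoder-only models. A minimal sketch, assuming the `modelopt.torch.quantization` API and the `google/madlad400-3b-mt` checkpoint from Hugging Face; the calibration prompts and loop are illustrative assumptions, not a tested recipe:

```python
# Hypothetical INT4 AWQ quantization of madlad400 with Model Optimizer,
# assuming encoder-decoder models were supported by mtq.quantize.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

import modelopt.torch.quantization as mtq

model_id = "google/madlad400-3b-mt"  # assumed HF checkpoint
model = T5ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).cuda()
tokenizer = T5Tokenizer.from_pretrained(model_id)

def forward_loop(model):
    # Calibrate on a few representative translation prompts (illustrative).
    for text in ["<2de> How are you?", "<2fr> Good morning."]:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        model.generate(**inputs, max_new_tokens=32)

# INT4_AWQ_CFG is one of the quantization configs shipped with Model
# Optimizer; calibration statistics are gathered by running forward_loop.
model = mtq.quantize(model, mtq.INT4_AWQ_CFG, forward_loop)
```

The quantized model would then still need to be exported to a TensorRT-LLM checkpoint for deployment, which is the part that depends on encoder-decoder support in the export path.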

cjluo-omniml commented 5 months ago

Thanks @ashwin-js for the feature request. It is on our roadmap for an upcoming release.