TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
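For concreteness, here is a minimal sketch of what defining a model and generating text can look like with the high-level LLM API shipped in recent releases; the checkpoint name and sampling values are illustrative assumptions, not requirements.

```python
from tensorrt_llm import LLM, SamplingParams

# Engine build happens under the hood when the LLM object is
# created from a Hugging Face checkpoint (checkpoint is an example).
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

for output in llm.generate(["Hello, my name is"], sampling_params):
    print(output.outputs[0].text)
```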
Hi, is there any way we can change the decoding logic? For example, support for speculative sampling or other methods?

We are working on adding support for speculative decoding. There is already preview support for it in the main branch (more work is needed to make it performant).
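For background, speculative sampling has a cheap draft model propose tokens and the target model verify them with an accept/reject rule that preserves the target distribution. The toy sketch below illustrates that generic rule from the speculative sampling literature; it is not TensorRT-LLM's implementation, and all names in it are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p_target, q_draft, drafted_token):
    """Accept or reject one drafted token (toy illustration).

    p_target, q_draft: next-token distributions over the vocabulary
    from the target and draft models; drafted_token: the token id the
    draft model sampled from q_draft.
    """
    # Accept the drafted token with probability min(1, p/q).
    accept_prob = min(1.0, p_target[drafted_token] / q_draft[drafted_token])
    if rng.random() < accept_prob:
        return drafted_token
    # On rejection, resample from the normalized residual max(0, p - q);
    # this correction keeps the overall output distributed as p_target.
    residual = np.maximum(p_target - q_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual)

# Toy distributions over a 3-token vocabulary.
p = np.array([0.5, 0.3, 0.2])  # target model
q = np.array([0.2, 0.5, 0.3])  # draft model
print(speculative_step(p, q, drafted_token=1))
```

In practice the draft model proposes several tokens at once and the target model verifies them in a single forward pass, which is where the speedup comes from; the accept/reject step above is the core rule applied per token.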