TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
Many research papers add an additional lm_head or decoder_layer to an LLM.
What is the process in the C++ or PyTorch runtime for selectively running a forward pass at inference time on only a single layer or head of the model, as is common, for example, in Medusa decoding?
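
Concretely, this is the kind of thing I mean, expressed as a plain PyTorch sketch (not TensorRT-LLM API; names like `MedusaHead` and `hidden_states` are just illustrative): the base model produces hidden states once, and then only the extra heads are run, without re-executing the full decoder stack.

```python
import torch
import torch.nn as nn


class MedusaHead(nn.Module):
    """Illustrative extra lm_head in the style of a Medusa draft head."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Forward pass over this single head only.
        return self.lm_head(torch.nn.functional.silu(self.proj(hidden_states)))


hidden_size, vocab_size = 4096, 32000
# Assume these hidden states were already produced by the base model's forward pass.
hidden_states = torch.randn(1, 8, hidden_size)
extra_heads = nn.ModuleList(MedusaHead(hidden_size, vocab_size) for _ in range(4))

with torch.no_grad():
    # Run inference on the extra heads alone, one small forward pass per head,
    # instead of running the whole model again.
    draft_logits = [head(hidden_states) for head in extra_heads]
```

How would I achieve the equivalent (running only these added heads or a single decoder layer) with an engine built by TensorRT-LLM, in either the C++ runtime or the PyTorch flow?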