TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
Does tensorrt_llm support a parameter similar to Hugging Face Transformers' `past_key_values`?
With such a parameter it would be possible to compute the KV cache in advance (e.g., over a shared prompt prefix) and pass it to `ModelRunner.generate()` or `ModelRunnerCpp.generate()`, which would speed up decoding. The Hugging Face pattern I have in mind is sketched below.
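For reference, here is a minimal sketch of the Hugging Face Transformers behavior being asked about: the prefix's KV cache is computed once with a forward pass, then passed back via `past_key_values` so later steps only process new tokens. The model name is just an illustration; any causal LM works.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prefix_ids = tokenizer("A shared system prompt.", return_tensors="pt").input_ids

with torch.no_grad():
    # Forward pass over the prefix; past_key_values holds its KV cache.
    out = model(prefix_ids, use_cache=True)
    past = out.past_key_values

    # Decode step: feed only the new token together with the cached KVs,
    # so the prefix is not re-processed.
    next_id = out.logits[:, -1:].argmax(dim=-1)
    out = model(next_id, past_key_values=past, use_cache=True)
```

The question is whether an equivalent input exists for `ModelRunner.generate()` / `ModelRunnerCpp.generate()` in tensorrt_llm.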