janhq / cortex.tensorrt-llm

Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU-accelerated inference on NVIDIA GPUs.
https://cortex.jan.ai/docs/cortex-tensorrt-llm
Apache License 2.0

feat: TensorRT-LLM InferenceRequest and stop_words_list #30

Open tikikun opened 7 months ago

tikikun commented 7 months ago
[image attachment]
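
The comment body itself is only a screenshot, but the issue title points at TensorRT-LLM's InferenceRequest and its stop_words_list input. As a rough sketch (not code from this repository, and the exact InferenceRequest API is not shown here), stop word sequences are conventionally passed to TensorRT-LLM as a flattened two-row tensor: row 0 holds the concatenated stop-word token IDs and row 1 holds the cumulative end offsets, with both rows padded to equal length using -1. A hypothetical helper that builds that layout for a single request might look like this:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper: flatten per-request stop-word token sequences into the
// two-row {ids, offsets} layout conventionally used for stop_words_list
// (row 0: concatenated token IDs, row 1: cumulative end offsets, padded with -1).
std::vector<int32_t> buildStopWordsList(
    const std::vector<std::vector<int32_t>>& stopWordTokenIds)
{
    std::vector<int32_t> ids;
    std::vector<int32_t> offsets;
    for (const auto& word : stopWordTokenIds)
    {
        ids.insert(ids.end(), word.begin(), word.end());
        offsets.push_back(static_cast<int32_t>(ids.size()));
    }

    // Pad both rows to the same length so the result forms a [2, maxLen] tensor.
    const size_t maxLen = std::max(ids.size(), offsets.size());
    ids.resize(maxLen, -1);
    offsets.resize(maxLen, -1);

    std::vector<int32_t> flat;
    flat.reserve(2 * maxLen);
    flat.insert(flat.end(), ids.begin(), ids.end());
    flat.insert(flat.end(), offsets.begin(), offsets.end());
    return flat;
}
```

For example, two stop words tokenized as {13} and {2, 28723} would produce {13, 2, 28723, 1, 3, -1}, i.e. a 2x3 tensor whose second row marks where each stop word ends in the first row. How this buffer is ultimately attached to an InferenceRequest is what the issue asks to support.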
github-actions[bot] commented 3 months ago

This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 15 days.