NVIDIA / TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
https://nvidia.github.io/TensorRT-LLM
Apache License 2.0
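As a minimal illustration of the Python API mentioned in the description, the sketch below generates text with the high-level LLM class. The model name and sampling settings are placeholder assumptions, and the exact API surface may differ between releases; see the quick-start documentation for your version.

```python
# Minimal sketch of the high-level Python API (model name and sampling
# settings are placeholder assumptions; check the docs for your release).
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # builds/loads a TensorRT engine
sampling = SamplingParams(temperature=0.8, top_p=0.95)

outputs = llm.generate(["What is TensorRT-LLM?"], sampling)
for out in outputs:
    print(out.outputs[0].text)
```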

TensorRT-LLM support on Jetson Orin Nano #2041

Open krishnarajk opened 1 month ago

krishnarajk commented 1 month ago

Hi,

The NVIDIA Jetson Orin Nano supports JetPack 6 and has CUDA 12. I would like to know whether TensorRT-LLM can run on the NVIDIA Jetson Orin Nano Developer Kit.

Thank you.

ReturnToFirst commented 1 month ago

You can try TensorRT-LLM with the dev-sm87-trt101 branch. Before you build TensorRT-LLM, you should upgrade your TensorRT version to 10.1.0.
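A quick way to confirm the environment before attempting the build is to check the installed TensorRT and CUDA versions from Python. This is a minimal sketch and assumes the JetPack-provided TensorRT Python bindings (and optionally PyTorch) are installed:

```python
# Minimal check of the local TensorRT / CUDA setup before building
# TensorRT-LLM from the dev-sm87-trt101 branch (assumes the JetPack
# TensorRT Python bindings, and optionally PyTorch, are installed).
import tensorrt as trt

print("TensorRT:", trt.__version__)  # should be 10.1.0 or newer for this branch

try:
    import torch
    print("CUDA runtime:", torch.version.cuda)        # JetPack 6 ships CUDA 12.x
    print("CUDA device available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed; skipping CUDA runtime check")
```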

github-actions[bot] commented 1 week ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 15 days.