dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Xavier AGX & Triton Server #1338

Closed eanmikale closed 2 years ago

eanmikale commented 2 years ago

Dusty, I hope you are well. We are working with a large client and Inception team, trying to run a Triton Server node on the Jetson Xavier AGX with JetPack 4.6 on bare metal. We would rather not use Docker. We have also tried TAO-Iot without success. The source documentation for Triton Server is scattered, while the TAO-Iot documentation is clean but requires CUDA 11.3. I also saw that the Quick Start Guide section states "Coming Soon" for JetPack 4.6. Is there any update on that, or any other advanced documentation you might be able to provide for a bare-metal Triton Server node? What cyber-security protocols are used? Thank you.
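For context on what a bare-metal setup involves: Triton publishes JetPack builds as tarball releases on its own GitHub releases page, which can be extracted and run directly without Docker. The sketch below is an assumption about the general shape of such a setup, not guidance from this repo; the tarball filename, model name (`mymodel`), and tensor names are placeholders, and the `config.pbtxt` fields follow Triton's standard model-configuration format.

```shell
# Hypothetical bare-metal layout (JetPack build of Triton, extracted from
# the release tarball on the triton-inference-server GitHub releases page):
#
#   tritonserver/
#   ├── bin/tritonserver
#   └── lib/...
#
#   models/                      # model repository
#   └── mymodel/                 # placeholder model name
#       ├── config.pbtxt
#       └── 1/                   # version directory
#           └── model.plan       # TensorRT engine built on the Jetson

# Example config.pbtxt for a TensorRT engine (tensor names are placeholders):
#
#   name: "mymodel"
#   platform: "tensorrt_plan"
#   max_batch_size: 8
#   input [
#     { name: "input_0", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
#   ]
#   output [
#     { name: "output_0", data_type: TYPE_FP32, dims: [ 1000 ] }
#   ]

# Launch the server against the model repository:
./tritonserver/bin/tritonserver --model-repository=/path/to/models
```

`--model-repository` is Triton's standard flag for pointing the server at the model directory; any library-path or dependency setup required by a particular JetPack release is documented alongside that release.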

dusty-nv commented 2 years ago

Hi @eanmikale, I recommend that you post this to either the Jetson AGX Xavier forum or the Triton Server GitHub. Wishing you the best of luck with your project!