triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

Jetson support of pytorch and PTH-TRT #2240

Closed lorenzo-caruso closed 1 year ago

lorenzo-caruso commented 3 years ago

Hi, I saw that the DeepStream SDK 5.0 release does not yet support PyTorch. I would like to ask some questions about this topic.

1) Is PyTorch support planned for deepstream-triton on Jetson in a future release? If so, when?

2) When will PTH-TRT be released?

3) Will PTH-TRT be included in deepstream-triton?

Thanks, Lorenzo

CoderHam commented 3 years ago
  1. We do plan to add Torch support to the Jetson build, but this would likely be a couple of releases in the future.
  2. Torch-TRT is not currently on our radar. It may be a while before it reaches both x86 and Jetson.
dyastremsky commented 1 year ago

PyTorch is now supported on Jetson. Closing this issue.
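For readers landing here now that the PyTorch backend is available on Jetson: Triton serves a TorchScript model from a model repository with a `config.pbtxt` alongside the exported `model.pt`. A minimal sketch is below; the model name, shapes, and batch size are illustrative assumptions, not from this thread. Note that TorchScript models do not carry I/O names, so Triton's PyTorch backend uses positional names of the form `INPUT__0`, `OUTPUT__0`.

```
# Hypothetical layout (names and dims are assumptions):
#   model_repository/
#     resnet50_pt/
#       config.pbtxt      <- this file
#       1/
#         model.pt        <- TorchScript export
name: "resnet50_pt"
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"            # positional name required for TorchScript
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

With the repository laid out this way, `tritonserver --model-repository=/path/to/model_repository` should load the model on a Jetson build that includes the PyTorch backend.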