NVIDIA-AI-IOT / whisper_trt

A project that optimizes Whisper for low latency inference using NVIDIA TensorRT

whisper_trt on Jetson Xavier NX #6

Open guischu09 opened 4 months ago

guischu09 commented 4 months ago

Is there any available container for me to run whisper_trt on a Jetson Xavier NX with JetPack 5.1.2? If not, what path would you suggest I pursue to be able to run whisper_trt on a Jetson Xavier NX? Is that possible?

jaybdub commented 3 months ago

Hi @guischu09 ,

Thanks for reaching out. We haven't tried this configuration, but you can give it a go.

The dependencies for whisper_trt are pretty light:

  1. PyTorch
  2. Whisper
  3. tensorrt (w. Python API)
  4. torch2trt

You could always try installing these components manually, or use containers that already have them; a quick check like the sketch below can confirm they are in place.
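A minimal sanity-check sketch (not part of whisper_trt, and untested on Xavier NX / JetPack 5.1.2) that just reports whether those four packages import on the device; exact versions and builds will vary by JetPack release:

```python
import importlib

# The four runtime dependencies listed above, by their usual import names.
DEPS = ("torch", "whisper", "tensorrt", "torch2trt")

for name in DEPS:
    try:
        mod = importlib.import_module(name)
        version = getattr(mod, "__version__", "unknown version")
        print(f"{name}: OK ({version})")
    except ImportError as exc:
        print(f"{name}: MISSING ({exc})")
```

If all four report OK, whisper_trt itself should install without further system-level dependencies.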

Let me know if this helps or if you run into any issues.

John