dusty-nv / jetson-containers

Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
MIT License

Installing ONNXRUNTIME on Jetson Xavier Orin (CUDA 12.2) #573

Open · huzaifahshamim opened this issue 4 months ago

huzaifahshamim commented 4 months ago

I am attempting to build a Docker container on a Jetson Orin that uses the image nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel.

At the same time, I am trying to install onnxruntime-gpu using the wheels listed at https://elinux.org/Jetson_Zoo#ONNX_Runtime (onnxruntime 1.18.0, Python 3.10).

However, this onnxruntime build uses CUDA 11 while the image uses CUDA 12, so I get errors that the cuDNN and cuBLAS 11 .so libraries cannot be found.
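
Here is how the mismatch shows up inside the container (a sketch; it assumes the CUDA toolkit is on the PATH, as in the JetPack defaults, and the error message is representative):

```bash
nvcc --version                 # reports the CUDA 12.x toolkit the image ships with
ldconfig -p | grep libcublas   # lists libcublas.so.12 only, with no libcublas.so.11
# so the CUDA 11 onnxruntime-gpu wheel fails at runtime with an error like:
#   libcublas.so.11: cannot open shared object file: No such file or directory
```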

Where can I find the most recent onnxruntime-gpu that was built against CUDA 12.2?

dusty-nv commented 4 months ago

@huzaifahshamim this container includes TensorRT: https://github.com/dusty-nv/jetson-containers/tree/master/packages/onnxruntime

Or you can grab the wheels from http://jetson.webredirect.org/. Alternatively, if you build onnxruntime from jetson-containers, it will rebuild the wheels for a custom configuration when necessary.
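
For example, the typical workflow with the jetson-containers CLI looks roughly like this (a sketch of the documented commands; the explicit build step is only needed when no prebuilt wheel matches your JetPack/CUDA version):

```bash
# Clone the repo and install the CLI helpers
git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh

# Run a container whose onnxruntime build matches your L4T/JetPack version;
# autotag picks a compatible image, pulling or building it if needed
jetson-containers run $(autotag onnxruntime)

# Or explicitly build the onnxruntime package against your local configuration
jetson-containers build onnxruntime
```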

huzaifahshamim commented 4 months ago

Thank you for your quick response @dusty-nv .

Attempting to access jetson.webredirect.org gives me this error:

> Hmmm… can't reach this page. jetson.webredirect.org's server IP address could not be found. Try:
>
> - Searching the web for "jetson webredirect org"
> - Checking the connection
> - Checking the proxy, firewall, and DNS settings

I have some additional dependencies that need to be installed, so I did not want to use that container on its own.

I also had a few follow ups:

  1. My CUDA version on the Orin is 12.2, which means that onnxruntime 1.17.0 should work, correct? However, I still run into the cuBLAS 11.0 errors. So, if I were to use the container, which one would I even use?
  2. If I don't want to use the container, how can I access the .org website you linked?
  3. What is the difference between the builder and non-builder tags?

Update: I believe I answered question 1: running `$(autotag onnxruntime)` told me which container image is compatible.

Does this image also contain the devel version of TensorRT? I need that as well, since I would like to be able to use trtexec in the Docker container that I spin up.

dusty-nv commented 4 months ago

@huzaifahshamim yes, I use the tensorrt-devel libraries and the TensorRT Python bindings in jetson-containers, and jetson-containers will also build onnxruntime for you if needed.
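
A quick sanity check from inside the running container (a sketch; the trtexec path assumes the standard JetPack layout, and model.onnx is a placeholder for your own model):

```bash
# Confirm the onnxruntime wheel sees the GPU and exposes the CUDA/TensorRT providers
python3 -c "import onnxruntime; print(onnxruntime.get_device(), onnxruntime.get_available_providers())"

# trtexec ships with the TensorRT devel packages at the standard JetPack path
/usr/src/tensorrt/bin/trtexec --verbose --onnx=model.onnx
```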

huzaifahshamim commented 4 months ago

Sounds great, thank you! All is working well so far. I am now running into an import error, though, and was wondering what the best way to solve it would be.

I am attempting to replicate the inferencing done here: https://github.com/NVIDIA/TensorRT/blob/main/samples/python/efficientnet/infer.py

On line 24, `from cuda import cudart` is called, and I get a ModuleNotFoundError on that line when I attempt to run infer.py in my container. I know that `pip3 install cuda` is not the right way to do it. What is the best way?

dusty-nv commented 4 months ago

@huzaifahshamim for that you would want to include the cuda-python package or wheels in your container:
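
Either of the following should work (a sketch; my_container is a placeholder name, and cuda-python is NVIDIA's PyPI package that provides the cuda.cudart bindings the sample imports):

```bash
# Option 1: include the cuda-python package when building your container image
jetson-containers build --name=my_container onnxruntime cuda-python

# Option 2: install NVIDIA's bindings from PyPI inside the existing container
pip3 install cuda-python

# Verify the import that infer.py makes on line 24
python3 -c "from cuda import cudart; print(cudart.cudaGetDeviceCount())"
```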