huzaifahshamim opened 4 months ago
@huzaifahshamim this container includes TensorRT: https://github.com/dusty-nv/jetson-containers/tree/master/packages/onnxruntime
Or you can grab the wheels from http://jetson.webredirect.org/. Alternatively, if you build onnxruntime from jetson-containers, it will rebuild the wheels for a custom configuration when necessary.
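For example, a minimal build invocation might look like the following (a sketch based on the jetson-containers README; the clone/install step is one-time):

```bash
# one-time setup: clone jetson-containers and install its CLI tools
git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh

# build (or rebuild) the onnxruntime container for the local JetPack/CUDA configuration
jetson-containers build onnxruntime
```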
Thank you for your quick response @dusty-nv .
Attempting to access jetson.webredirect.org gives me this error:
Hmmm… can't reach this page
jetson.webredirect.org’s server IP address could not be found. Try:
- Searching the web for jetson webredirect org
- Checking the connection
- Checking the proxy, firewall, and DNS settings
I had some additional dependencies I needed installed, so I did not want to solely use that container.
I also had a few follow-ups:
Update: I believe I answered question 1 myself: running $(autotag onnxruntime) told me which container image is compatible.
Does this image also contain the devel version of TensorRT? That is what I need as well; I would like to be able to use trtexec in the Docker container that I spin up.
@huzaifahshamim yes, I use the tensorrt-devel libraries and tensorrt-python bindings in jetson-containers, and jetson-containers will also build onnxruntime for you if needed.
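A quick way to verify this from inside the container (a sketch; autotag ships with jetson-containers, and /usr/src/tensorrt/bin is the usual trtexec location on Jetson, so adjust the path if yours differs):

```bash
# start an interactive container matched to the installed JetPack
jetson-containers run $(autotag onnxruntime)

# inside the container: confirm trtexec and the TensorRT Python bindings are present
which trtexec || ls /usr/src/tensorrt/bin/trtexec
python3 -c "import tensorrt; print(tensorrt.__version__)"
```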
Sounds great, thank you! All is working well so far. I am now running into an import error, though, and was wondering about the best way to solve it.
I am attempting to replicate the inferencing done here: https://github.com/NVIDIA/TensorRT/blob/main/samples/python/efficientnet/infer.py
On line 24, from cuda import cudart is called, and I get a ModuleNotFoundError on that line when I attempt to run infer.py in my container. I know that pip3 install cuda is not the right way to do it. What is the best way?
@huzaifahshamim for that you would want to include the cuda-python package or wheels in your container:
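For instance (a minimal sketch; the cuda-python release should match the container's CUDA major version, e.g. a 12.x release for a CUDA 12 image):

```bash
# install NVIDIA's cuda-python bindings, which provide the cuda.cudart module
pip3 install cuda-python

# sanity-check the exact import that infer.py uses on line 24
python3 -c "from cuda import cudart; print(cudart.cudaRuntimeGetVersion())"
```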
I am attempting to build a Docker container on a Jetson Orin that uses the image nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel.
At the same time, I am trying to install onnxruntime-gpu using the following wheels: https://elinux.org/Jetson_Zoo#ONNX_Runtime (onnxruntime 1.18.0, Python 3.10).
However, this onnxruntime build uses CUDA 11 while the image uses CUDA 12, so I get errors that the cuDNN and cuBLAS 11 .so libraries cannot be found.
Where can I find the most recent onnxruntime-gpu that was built against CUDA 12.2?
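One hedged option, following the jetson-containers suggestion above: let jetson-containers rebuild the wheel against the matching CUDA version (the CUDA_VERSION override is described in the jetson-containers docs; exact supported versions depend on your JetPack release):

```bash
# rebuild onnxruntime against CUDA 12.2 so the resulting wheel matches the CUDA 12 base image
CUDA_VERSION=12.2 jetson-containers build onnxruntime
```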