dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

jetpack 5.0.2 xavier nx detectnet not working #1515

Closed immnas closed 1 year ago

immnas commented 1 year ago

Hi,

I compiled jetson-inference following the instructions in "Building the Project from Source": https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md

video-viewer works fine, but when I run `detectnet /dev/video0` it fails:

```
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] Unable to determine GPU memory usage
[TRT] Unable to determine GPU memory usage
[TRT] [MemUsageChange] Init CUDA: CPU +6, GPU +0, now: CPU 24, GPU 0 (MiB)
[TRT] CUDA initialization failure with error: 222. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
[TRT] DetectNativePrecisions() failed to create TensorRT IBuilder instance
[TRT] selecting fastest native precision for GPU: FP32
[TRT] could not find engine cache /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.8501.GPU.FP32.engine
[TRT] cache file invalid, profiling network model on device GPU
[TRT] Unable to determine GPU memory usage
[TRT] Unable to determine GPU memory usage
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 24, GPU 0 (MiB)
[TRT] CUDA initialization failure with error: 222. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Segmentation fault (core dumped)
```

My environment:

- JetPack 5.0.2, Jetson Xavier NX
- CUDA 11.8.89
- TensorRT 8.5.1.7
- cuDNN 8.4.1.50
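For reference, a hedged sketch of the commands typically used on a Jetson to read these versions back (the `nvcc` output format and the `nvinfer`/`libcudnn` package names are assumptions based on the usual JetPack apt packaging):

```shell
#!/bin/sh
# Query toolkit versions on the host; each step falls back to a message
# rather than failing if the component is not installed.
CUDA_VER=$(nvcc --version 2>/dev/null | grep -o 'release [0-9.]*' || echo "nvcc not on PATH")
TRT_PKGS=$(dpkg -l 2>/dev/null | grep -E 'nvinfer|libcudnn' || echo "no TensorRT/cuDNN packages found")
echo "CUDA: $CUDA_VER"
echo "$TRT_PKGS"
```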

immnas commented 1 year ago

Inside Docker it works fine: when I run `docker/run.sh` and then `detectnet /dev/video0`, detection runs as expected.

dusty-nv commented 1 year ago

Hi @immnas, the latest version of TensorRT for JetPack is TensorRT 8.4, so my guess is that you installed TensorRT 8.5 for ARM SBSA (which would have been built against a different version of CUDA). I've not tested this project upgrading CUDA 11.4 -> 11.8. The docker container works fine for you because it has CUDA/cuDNN/TensorRT installed inside the container, so those versions would be the versions that came with JetPack 5.0.2.
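To make the mismatch concrete, here is a minimal sketch comparing the versions reported in this issue against what JetPack 5.0.2 ships. The expected versions (CUDA 11.4, TensorRT 8.4, cuDNN 8.4) are assumptions taken from NVIDIA's release notes, not from this thread:

```shell
#!/bin/sh
# Compare installed major.minor versions against the JetPack 5.0.2 build.
mismatches=0
check() {  # usage: check NAME INSTALLED EXPECTED
    if [ "$2" = "$3" ]; then
        echo "$1 $2 matches JetPack 5.0.2"
    else
        echo "$1 mismatch: installed $2, JetPack 5.0.2 ships $3"
        mismatches=$((mismatches + 1))
    fi
}
check CUDA     11.8 11.4   # installed version taken from the issue report
check TensorRT 8.5  8.4
check cuDNN    8.4  8.4
echo "$mismatches component(s) differ from the JetPack build"
```

With the versions from this issue, CUDA and TensorRT both differ from the JetPack build, which is consistent with the TensorRT 8.5 ARM SBSA theory above.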

dusty-nv commented 1 year ago

I also recommend running deviceQuery and trtexec (outside of Docker) first, to confirm that your environment is working.
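A hedged sketch of that check, using the paths where JetPack usually puts these tools (deviceQuery ships as buildable source with the CUDA samples, trtexec under `/usr/src/tensorrt`; adjust if your install differs):

```shell
#!/bin/sh
# Sanity-check the CUDA/TensorRT install outside Docker.

# 1) deviceQuery: build it from the CUDA samples, then run it.
DQ=/usr/local/cuda/samples/1_Utilities/deviceQuery
if [ -d "$DQ" ]; then
    make -C "$DQ" >/dev/null && "$DQ/deviceQuery"
else
    echo "deviceQuery samples not found at $DQ"
fi

# 2) trtexec: installed alongside TensorRT.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
if [ -x "$TRTEXEC" ]; then
    "$TRTEXEC" --help >/dev/null && echo "trtexec ran OK"
else
    echo "trtexec not found at $TRTEXEC"
fi
```

If either tool fails with the same error 222, the problem is in the CUDA install itself rather than in jetson-inference.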

immnas commented 1 year ago

OK, thanks. I am now downgrading my TensorRT/CUDA/cuDNN versions.