dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Unable to run peoplenet model with detectnet program #1759

Open sai-ssauto opened 9 months ago

sai-ssauto commented 9 months ago

python3 detectnet.py --model=peoplenet pedestrians.mp4 pedestrians_peoplenet.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for pedestrians.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- discovered video resolution: 960x540 (framerate 29.970030 Hz)
[gstreamer] gstDecoder -- discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)960, height=(int)540, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=pedestrians.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec name=decoder ! video/x-raw(memory:NVMM) ! appsink name=mysink
[video] created gstDecoder from file:///home/taurus1/jetson-inference/python/training/detection/ssd/pedestrians.mp4

gstDecoder video options:

-- URI: file:///home/taurus1/jetson-inference/python/training/detection/ssd/pedestrians.mp4

resnet34peoplenet 100%[===================>] 85.02M 1.42MB/s in 54s

2023-11-21 03:23:27 (1.57 MB/s) - ‘resnet34_peoplenet_int8.etlt’ saved [89153465/89153465]

resnet34peoplenet 100%[===================>] 9.20K --.-KB/s in 0s
labels.txt 100%[===================>] 17 --.-KB/s in 0s
colors.txt 100%[===================>] 27 --.-KB/s in 0s
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter
tao-converter 100%[===================>] 120.72K 246KB/s in 0.5s
detectNet -- converting TAO model to TensorRT engine:
          -- input          resnet34_peoplenet_int8.etlt
          -- output         resnet34_peoplenet_int8.etlt.engine
          -- calibration    resnet34_peoplenet_int8.txt
          -- encryption_key tlt_encode
          -- input_dims     3,544,960
          -- output_layers  output_bbox/BiasAdd,output_cov/Sigmoid
          -- max_batch_size 1
          -- workspace_size 4294967296
          -- precision      fp16
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
[TRT] failed to convert model 'resnet34_peoplenet_int8.etlt' to TensorRT...
[TRT] failed to download model after 2 retries
[TRT] if this error keeps occuring, see here for a mirror to download the models from:
[TRT] https://github.com/dusty-nv/jetson-inference/releases
[TRT] failed to download built-in detection model 'peoplenet'
Traceback (most recent call last):
  File "detectnet.py", line 53, in <module>
    net = detectNet(args.network, sys.argv, args.threshold)
Exception: jetson.inference -- detectNet failed to load network
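
For context, the failure happens inside the detectNet constructor, where the built-in 'peoplenet' TAO model is downloaded and converted to a TensorRT engine before any video frames are processed. A minimal sketch of the detectnet.py flow, assuming the jetson_inference / jetson_utils Python bindings from this repo are installed, looks roughly like this:

```python
# Minimal sketch of the detectnet.py flow (assumes the jetson_inference and
# jetson_utils Python bindings built by this repo are installed).
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

# The traceback above originates here: loading the built-in 'peoplenet' TAO
# model triggers the download + tao-converter step that is failing.
net = detectNet("peoplenet", threshold=0.5)

input = videoSource("pedestrians.mp4")              # input video from the command above
output = videoOutput("pedestrians_peoplenet.mp4")   # output video from the command above

while True:
    img = input.Capture()
    if img is None:          # capture timeout, try again
        continue

    detections = net.Detect(img)
    output.Render(img)

    if not input.IsStreaming() or not output.IsStreaming():
        break
```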

dusty-nv commented 9 months ago

L4T BSP Version: L4T R32.5.1
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory

@sai-ssauto I think your issue is that you are running an older version of JetPack-L4T, from before there was a compatible tao-converter. I would recommend updating to JetPack 4.6 / L4T R32.7 for Nano/TX1/TX2, or JetPack 5.1 / L4T R35 for Xavier/Orin.
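
A quick way to confirm this is to check whether the TensorRT 8 runtime (libnvinfer.so.8) that the v3.21.11_trt8.0 tao-converter links against is actually present on the device. If I recall correctly, L4T R32.5.1 corresponds to JetPack 4.5.x, which ships TensorRT 7 and therefore only provides libnvinfer.so.7. A small sanity-check sketch, assuming the TensorRT Python bindings that come with JetPack are installed:

```python
import ctypes

# Report the installed TensorRT version (assumes the tensorrt Python module
# that ships with JetPack is installed).
try:
    import tensorrt
    print("TensorRT version:", tensorrt.__version__)
except ImportError:
    print("tensorrt Python module not found")

# The v3.21.11_trt8.0 build of tao-converter needs the TensorRT 8 runtime.
try:
    ctypes.CDLL("libnvinfer.so.8")
    print("libnvinfer.so.8 found -- this tao-converter build should load")
except OSError:
    print("libnvinfer.so.8 not found -- upgrade JetPack (4.6+ / 5.x) to get TensorRT 8")
```

If the second check fails, the tao-converter download itself succeeded but it cannot load, which matches the error in the log above.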