sai-ssauto opened 9 months ago
L4T BSP Version: L4T R32.5.1
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
@sai-ssauto I think your issue is that you are running an older version of JetPack-L4T from before there was a compatible tao-converter - would recommend updating to JetPack 4.6 / L4T R32.7 for Nano/TX1/TX2 or JetPack 5.1 / L4T R35 for Xavier/Orin.
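To make that check concrete, here is a minimal sketch (not part of jetson-inference itself) that parses the release string found in /etc/nv_tegra_release and compares it against the minimum release suggested above (R32.7 for Nano/TX1/TX2); the sample release string and the helper names are illustrative assumptions.

```python
import re

def parse_l4t_release(text):
    """Return (major, minor) parsed from an /etc/nv_tegra_release string, or None.

    The file starts with something like: "# R32 (release), REVISION: 5.1, ..."
    """
    m = re.search(r"R(\d+).*?REVISION:\s*(\d+)", text)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

def l4t_is_supported(version, minimum=(32, 7)):
    """True if the installed L4T is at least the suggested minimum release."""
    return version is not None and version >= minimum

# Example using the release reported in the log above (L4T R32.5.1):
ver = parse_l4t_release("# R32 (release), REVISION: 5.1, GCID: 26202423")
print(ver, l4t_is_supported(ver))  # R32.5 is below R32.7, so not supported
```

Tuple comparison handles the major/minor ordering, so R35.x on Xavier/Orin also passes the same check.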
python3 detectnet.py --model=peoplenet pedestrians.mp4 pedestrians_peoplenet.mp4
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for pedestrians.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- discovered video resolution: 960x540 (framerate 29.970030 Hz)
[gstreamer] gstDecoder -- discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)960, height=(int)540, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=pedestrians.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec name=decoder ! video/x-raw(memory:NVMM) ! appsink name=mysink
[video] created gstDecoder from file:///home/taurus1/jetson-inference/python/training/detection/ssd/pedestrians.mp4
gstDecoder video options:
-- URI: file:///home/taurus1/jetson-inference/python/training/detection/ssd/pedestrians.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType: input
  -- codec: H264
  -- codecType: omx
  -- width: 960
  -- height: 540
  -- frameRate: 29.97
  -- numBuffers: 4
  -- zeroCopy: true
  -- flipMethod: none
  -- loop: 0
[gstreamer] gstEncoder -- codec not specified, defaulting to H.264
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! omxh264enc name=encoder bitrate=4000000 ! video/x-h264 ! h264parse ! qtmux ! filesink location=pedestrians_peoplenet.mp4
[video] created gstEncoder from file:///home/taurus1/jetson-inference/python/training/detection/ssd/pedestrians_peoplenet.mp4
gstEncoder video options:
-- URI: file:///home/taurus1/jetson-inference/python/training/detection/ssd/pedestrians_peoplenet.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType: output
  -- codec: H264
  -- codecType: omx
  -- frameRate: 30
  -- bitRate: 4000000
  -- numBuffers: 4
  -- zeroCopy: true
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0
glDisplay video options:
-- URI: display://0
     - location: 0
  -- deviceType: display
  -- ioType: output
  -- width: 1920
  -- height: 1080
  -- frameRate: 0
  -- numBuffers: 4
  -- zeroCopy: true
[TRT] running model command: tao-model-downloader.sh peoplenet_deployable_quantized_v2.6.1
ARCH: aarch64
reading L4T version from /etc/nv_tegra_release
L4T BSP Version: L4T R32.5.1
[TRT] downloading peoplenet_deployable_quantized_v2.6.1
[TRT] wget failed to download 'resnet34_peoplenet_int8.etlt' (error code=4)
[TRT] attempting to retry download of https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.6.1/files/resnet34_peoplenet_int8.etlt (retry 1 of 10)
--2023-11-21 03:22:31--  https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.6.1/files/resnet34_peoplenet_int8.etlt
Resolving api.ngc.nvidia.com (api.ngc.nvidia.com)... 54.68.100.96, 54.187.192.111
Connecting to api.ngc.nvidia.com (api.ngc.nvidia.com)|54.68.100.96|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com/org/nvidia/team/tao/models/peoplenet/versions/deployable_quantized_v2.6.1/files/resnet34_peoplenet_int8.etlt?response-content-disposition=attachment%3B%20filename%3D%22resnet34_peoplenet_int8.etlt%22&response-content-type=application%2Foctet-stream&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20231120T215231Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=AKIA3PSNVSIZ7SU24VXK%2F20231120%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=b2334506f8a2964d38b92488244b66768e222194a4eacb1fa2c840ff1f6f32fd [following]
--2023-11-21 03:22:32--  https://prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com/org/nvidia/team/tao/models/peoplenet/versions/deployable_quantized_v2.6.1/files/resnet34_peoplenet_int8.etlt?response-content-disposition=attachment%3B%20filename%3D%22resnet34_peoplenet_int8.etlt%22&response-content-type=application%2Foctet-stream&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20231120T215231Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Credential=AKIA3PSNVSIZ7SU24VXK%2F20231120%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=b2334506f8a2964d38b92488244b66768e222194a4eacb1fa2c840ff1f6f32fd
Resolving prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com (prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com)... 52.92.132.42, 52.92.227.50, 3.5.81.112, ...
Connecting to prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com (prod-model-registry-ngc-bucket.s3.us-west-2.amazonaws.com)|52.92.132.42|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 89153465 (85M) [application/octet-stream]
Saving to: ‘resnet34_peoplenet_int8.etlt’
resnet34peoplenet  100%[===================>]  85.02M  1.42MB/s    in 54s
2023-11-21 03:23:27 (1.57 MB/s) - ‘resnet34_peoplenet_int8.etlt’ saved [89153465/89153465]
resnet34peoplenet  100%[===================>]   9.20K  --.-KB/s    in 0s
labels.txt         100%[===================>]      17  --.-KB/s    in 0s
colors.txt         100%[===================>]      27  --.-KB/s    in 0s
[TRT] downloading tao-converter from https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.21.11_trt8.0_aarch64/files/tao-converter
tao-converter      100%[===================>] 120.72K   246KB/s    in 0.5s
detectNet -- converting TAO model to TensorRT engine:
          -- input          resnet34_peoplenet_int8.etlt
          -- output         resnet34_peoplenet_int8.etlt.engine
          -- calibration    resnet34_peoplenet_int8.txt
          -- encryption_key tlt_encode
          -- input_dims     3,544,960
          -- output_layers  output_bbox/BiasAdd,output_cov/Sigmoid
          -- max_batch_size 1
          -- workspace_size 4294967296
          -- precision      fp16
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
[TRT] failed to convert model 'resnet34_peoplenet_int8.etlt' to TensorRT...
[TRT] failed to download model after 2 retries
[TRT] if this error keeps occuring, see here for a mirror to download the models from:
[TRT] https://github.com/dusty-nv/jetson-inference/releases
[TRT] failed to download built-in detection model 'peoplenet'
Traceback (most recent call last):
  File "detectnet.py", line 53, in
    net = detectNet(args.network, sys.argv, args.threshold)
Exception: jetson.inference -- detectNet failed to load network
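The root failure in the log above is the dynamic linker not finding libnvinfer.so.8: the downloaded tao-converter build was compiled against TensorRT 8, while JetPack 4.x / L4T R32.5 ships an older TensorRT. A quick hedged diagnostic is to ask the linker which nvinfer library (if any) it can resolve; this sketch uses the standard-library `ctypes.util.find_library`, and the library names checked are just examples.

```python
import ctypes.util

def find_shared_lib(name):
    """Return the shared-library name the dynamic linker resolves, or None.

    e.g. find_shared_lib("nvinfer") -> "libnvinfer.so.8" on a TensorRT 8
    system, an older soname on JetPack 4.x, or None if TensorRT is absent.
    """
    return ctypes.util.find_library(name)

# Report whether the linker can see the TensorRT runtime at all:
for lib in ("nvinfer", "nvinfer_plugin"):
    resolved = find_shared_lib(lib)
    print(f"{lib}: {resolved if resolved else 'NOT FOUND by the linker'}")
```

If this reports libnvinfer.so.7 (or nothing), the v3.21.11_trt8.0 tao-converter cannot run on that image, which matches the recommendation to upgrade to JetPack 4.6 / L4T R32.7 or later.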