marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Compiled the .so with CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo, but deepstream-app -c deepstream_app_config.txt plays abnormally (segfault) #405

Open Today-fine opened 1 year ago

Today-fine commented 1 year ago
Starting program: /usr/bin/deepstream-app -c deepstream_app_config.txt 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0xffffc2e7d900 (LWP 6117)]
[Detaching after vfork from child process 6118]
[Detaching after vfork from child process 6121]
Opening in BLOCKING MODE 
[New Thread 0xffffc088b900 (LWP 6124)]
[New Thread 0xffff8d446900 (LWP 6125)]
[New Thread 0xffff8cc45900 (LWP 6126)]
[New Thread 0xffff7fff9900 (LWP 6127)]
[New Thread 0xffff7f7f8900 (LWP 6128)]
[New Thread 0xffff7eff7900 (LWP 6129)]
0:00:05.319269177  6114 0xaaaaabbd5550 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/root/DeepStream-Yolo/model_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT output          8400x6          

0:00:05.493905685  6114 0xaaaaabbd5550 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /root/DeepStream-Yolo/model_b1_gpu0_fp32.engine
[New Thread 0xffff7d9ee900 (LWP 6132)]
[New Thread 0xffff7c808900 (LWP 6133)]
[New Thread 0xffff6fff9900 (LWP 6134)]
0:00:05.547068200  6114 0xaaaaabbd5550 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/root/DeepStream-Yolo/config_infer_primary_yoloV8.txt sucessfully
[New Thread 0xffff6f7f8900 (LWP 6135)]
[New Thread 0xffff6eff7900 (LWP 6136)]
[New Thread 0xffff6e7f6900 (LWP 6137)]
[New Thread 0xffff6dff5900 (LWP 6138)]

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready

[New Thread 0xffff6d7f4900 (LWP 6139)]
[New Thread 0xffff6381a900 (LWP 6140)]
[New Thread 0xffff1fff9900 (LWP 6141)]
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
[New Thread 0xffff1f7f8900 (LWP 6142)]
[New Thread 0xffff1eff7900 (LWP 6143)]
[New Thread 0xffff1e7f6900 (LWP 6144)]
NvMMLiteBlockCreate : Block : BlockType = 261 
[New Thread 0xffff1dff5900 (LWP 6145)]
[New Thread 0xffff1d7f4900 (LWP 6146)]
** INFO: <bus_callback:225>: Pipeline running

[New Thread 0xffff1cff3900 (LWP 6147)]
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
[New Thread 0xffff0bff9900 (LWP 6148)]
[New Thread 0xffff0b7f8900 (LWP 6149)]

Thread 9 "deepstream-app" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xffff7d9ee900 (LWP 6132)]
0x0000ffff8c3e2da4 in decodeTensorYolo(float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) ()
   from /root/DeepStream-Yolo/libnvdsinfer_custom_impl_Yolo.so
(gdb) bt
#0  0x0000ffff8c3e2da4 in decodeTensorYolo(float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) ()
    at /root/DeepStream-Yolo/libnvdsinfer_custom_impl_Yolo.so
#1  0x0000ffff8c3e3234 in NvDsInferParseCustomYolo(std::vector<NvDsInferLayerInfo, std::allocator<NvDsInferLayerInfo> > const&, NvDsInferNetworkInfo const&, NvDsInferParseDetectionParams const&, std::vector<NvDsInferObjectDetectionInfo, std::allocator<NvDsInferObjectDetectionInfo> >&) () at /root/DeepStream-Yolo/libnvdsinfer_custom_impl_Yolo.so
#2  0x0000ffff8c3e34f0 in NvDsInferParseYolo () at /root/DeepStream-Yolo/libnvdsinfer_custom_impl_Yolo.so
#3  0x0000ffffc16ce960 in  () at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_infer.so
#4  0x0000ffffc16b2104 in  () at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_infer.so
#5  0x0000ffffc16b2368 in  () at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_infer.so
#6  0x0000ffffc16b61d0 in nvdsinfer::NvDsInferContextImpl::dequeueOutputBatch(NvDsInferContextBatchOutput&) () at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_infer.so
#7  0x0000ffffc17c58fc in  () at /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#8  0x0000fffff7cc547c in  () at /lib/aarch64-linux-gnu/libglib-2.0.so.0
#9  0x0000fffff70be624 in start_thread (arg=0xfffff7ce8e30) at pthread_create.c:477
#10 0x0000fffff6f5249c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78
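
The backtrace, read together with the engine info above, points to a tensor-layout mismatch: the engine exposes a single fused output (8400x6), while decodeTensorYolo() takes three separate tensor pointers, presumably the boxes, scores, and classes tensors produced by this repo's export scripts. Indexing output layers that do not exist hands the parser invalid pointers, hence the SIGSEGV. A minimal sketch for checking an exported model before building an engine (the model path is hypothetical; the three-output layout is an assumption based on this repo's export scripts):

# Minimal sketch: list the outputs of an exported ONNX model.
# The path is hypothetical; the expected three-output layout
# (boxes, scores, classes) is an assumption based on this repo's export scripts.
import onnx

model = onnx.load("yolov8s.onnx")  # hypothetical path to the exported model
for out in model.graph.output:
    dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)
# A single output shaped like (batch, 8400, 6) means the model was exported
# with a different pipeline and will crash this repo's parser.
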
marcoslucianops commented 1 year ago

You are using the wrong engine. Please delete the incorrect engine file and run the deepstream-app to generate the new engine using the files from this repo.
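
After rebuilding, the new engine's bindings can be confirmed before launching the pipeline. A minimal sketch using the TensorRT Python API (TensorRT 8.x bindings, as shipped with DeepStream 6.2; the engine path is taken from the log above and may differ on your setup):

import tensorrt as trt

# Minimal sketch: print every binding of a serialized TensorRT engine.
logger = trt.Logger(trt.Logger.WARNING)
with open("model_b1_gpu0_fp32.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    for i in range(engine.num_bindings):
        kind = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
        print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))
# An engine built from this repo's ONNX should list three outputs
# (boxes, scores, classes) instead of the single 8400x6 output above.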

Today-fine commented 11 months ago

Hello, is the engine file that needs to be deleted the one I generated with DeepStream-Yolo? It's still the same after I deleted it.

marcoslucianops commented 11 months ago

Update the files to the version in this repo and run the deepstream-app to generate the engine file.

Today-fine commented 11 months ago

Thank you for your reply. Yesterday I asked my AI colleague to regenerate the ONNX model, so presumably the problem is in how the ONNX model was generated.
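
For reference, the difference likely comes down to the export wrapper: an exporter outside this repo can leave YOLOv8's head as one fused tensor (the 8400x6 output above), while this repo's script splits it into the three tensors the parser dereferences. A conceptual sketch of that split (not the repo's actual code; the class name, shapes, and slicing are assumptions based on the 8400-anchor head in the log above):

import torch

# Conceptual sketch, not this repo's actual export code: wrap a YOLOv8 model
# so the exported ONNX graph ends in three tensors (boxes, scores, classes)
# instead of one fused output.
class SplitOutput(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        y = self.model(x)      # raw head: (batch, 4 + num_classes, 8400)
        y = y.transpose(1, 2)  # -> (batch, 8400, 4 + num_classes)
        boxes = y[..., :4]     # box coordinates per anchor
        scores, classes = torch.max(y[..., 4:], dim=-1, keepdim=True)
        return boxes, scores, classes.to(boxes.dtype)

Exporting such a wrapper with torch.onnx.export(..., output_names=["boxes", "scores", "classes"]) would yield the layout the bundled parser expects; in practice, use the export script from this repo rather than a generic exporter.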