Closed · YYLCyylc closed this issue 1 year ago
Which PPYOLOE model did you test?
I used the model ppyoloe_plus_crn_s_80e_coco.onnx recommended in PPYOLOE.md.
Can you send the output from the terminal?
The output is here:
root@server58:/opt/nvidia/deepstream/deepstream-6.2/DeepStream-Yolo# deepstream-app -c deepstream_app_test.txt
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8553/ds-test ***
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
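(The warning above is only about memory usage, not the crash, but it can be silenced as the linked CUDA docs describe, by setting the `CUDA_MODULE_LOADING` environment variable before launching the app. A minimal sketch, assuming the stock deepstream-app config from this repo:)

```shell
# Enable CUDA lazy loading (supported on CUDA 11.7+) to reduce device memory usage
export CUDA_MODULE_LOADING=LAZY

# Then launch the pipeline as before, e.g.:
# deepstream-app -c deepstream_app_test.txt
```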
0:00:06.928878935 16864 0x7fa094002290 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/DeepStream-Yolo/model_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT image 3x640x640
1 INPUT kFLOAT scale_factor 2
2 OUTPUT kFLOAT tmp_17 8400x4
3 OUTPUT kFLOAT max_0.tmp_0 8400x1
4 OUTPUT kFLOAT cast_0.tmp_0 8400x1
0:00:06.953664693 16864 0x7fa094002290 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/DeepStream-Yolo/model_b1_gpu0_fp16.engine
0:00:06.962430478 16864 0x7fa094002290 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/DeepStream-Yolo/config_infer_primary_ppyoloe_plus.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready
** INFO: <bus_callback:225>: Pipeline running
** INFO: <bus_callback:225>: Pipeline running
Segmentation fault (core dumped)
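(For a segfault like this with no log context, a core dump or a gdb backtrace usually shows whether the crash is in the custom bbox parser or elsewhere in the pipeline. A minimal sketch, assuming gdb can be installed in the container; `deepstream_app_test.txt` is the config from the run above:)

```shell
# Allow core dumps so the crash site can be inspected after the fact
ulimit -c unlimited

# If gdb is available (apt-get install gdb), run the app under it and
# print a backtrace when the segfault occurs
if command -v gdb >/dev/null 2>&1; then
  gdb -q -ex run -ex bt -ex quit --args deepstream-app -c deepstream_app_test.txt
fi
```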
Actually, I successfully tested and ran the code on a Jetson Xavier NX with JetPack 4.6.2 and DeepStream 6.0.1, so it seems the issue is with my server environment configuration. Thank you for your assistance. Additionally, do you know whether it's possible to run the latest DeepStream (L4T 6.2) in Docker if my local JetPack version is 4.6.2?
As far as I know, it's not possible.
Thank you very much
Environment: DeepStream 6.2 in Docker. I followed the steps in PPYOLOE.md, and the engine is generated successfully, but when the pipeline starts running, the error "Segmentation fault (core dumped)" occurs. I tried YOLOv8 and it works fine.