NVIDIA-AI-IOT / deepstream_tao_apps

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
MIT License

No output #36

Closed spacewalk01 closed 3 years ago

spacewalk01 commented 3 years ago

I used the following command. It loaded the engine successfully and printed Running..., but there is no result, and it keeps running even though only a single image was given as input.

./apps/ds-tlt  -c configs/yolov4_tlt/pgie_yolov4_tlt_config.txt -i /home/images/img1.jpg -d -b 1
spacewalk01 commented 3 years ago

Here is the config file:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=yolov4_labels.txt
model-engine-file=../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine
#int8-calib-file=../../models/yolov4/cal.bin
#tlt-encoded-model=../../models/yolov4/yolov4_resnet18.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;544;960
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
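For reference, with net-scale-factor=1.0, offsets=103.939;116.779;123.68 and model-color-format=1 (BGR) as in the [property] section above, nvinfer normalizes each pixel roughly as y = net-scale-factor * (x - offset), per channel. A minimal Python sketch of that formula (illustrative only; the actual preprocessing runs inside nvinfer on the GPU):

```python
# Illustrative sketch of the per-pixel preprocessing implied by the
# [property] section above (not the actual nvinfer implementation):
#   y = net-scale-factor * (x - offset), applied per channel, BGR order.

NET_SCALE_FACTOR = 1.0
OFFSETS = (103.939, 116.779, 123.68)  # offsets=... from the config

def preprocess_pixel(bgr):
    """Normalize one BGR pixel the way the config describes."""
    return tuple(NET_SCALE_FACTOR * (c - o) for c, o in zip(bgr, OFFSETS))

# Example: a mid-gray pixel
print(preprocess_pixel((128, 128, 128)))
```

If the deployed model was trained with different offsets or scale, the engine will still run but produce empty or garbage detections, so this is worth double-checking against the training spec.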
younglalala commented 3 years ago

Has this problem been solved?

azmathmoosa commented 2 years ago

Did you resolve this issue? What was the problem? I am facing a similar issue on DeepStream 6.1, CUDA 11.7, TensorRT OSS 8.4.1. It is stuck at Running... and nothing happens for a long time.

root@recodePC:/work/deepstream_tao_apps# ./apps/tao_detection/ds-tao-detection -c configs/yolov4_tao/pgie_yolov4_tao_config_dgpu.txt -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_qHD.h264 
Now playing: configs/yolov4_tao/pgie_yolov4_tao_config_dgpu.txt
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /work/deepstream_tao_apps/configs/yolov4_tao/../../models/yolov4/yolov4_resnet18_395.etlt_b1_gpu0_int8.engine open error
0:00:02.004426952   283 0x55e4a4db1e70 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/work/deepstream_tao_apps/configs/yolov4_tao/../../models/yolov4/yolov4_resnet18_395.etlt_b1_gpu0_int8.engine failed
0:00:02.046484432   283 0x55e4a4db1e70 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/work/deepstream_tao_apps/configs/yolov4_tao/../../models/yolov4/yolov4_resnet18_395.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:02.046498849   283 0x55e4a4db1e70 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
(the warning above repeats 20 more times; duplicates omitted)
WARNING: [TRT]: builtin_op_importers.cpp:4716: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
0:02:01.923663617   283 0x55e4a4db1e70 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /work/deepstream_tao_apps/models/yolov4/yolov4_resnet18_395.etlt_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x544x960       
1   OUTPUT kINT32 BatchedNMS      1               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

0:02:01.956901466   283 0x55e4a4db1e70 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:configs/yolov4_tao/pgie_yolov4_tao_config_dgpu.txt sucessfully
Running...
^C

Even when the engine has already been generated and the app is re-run, the problem is the same:

root@recodePC:/work/deepstream_tao_apps# ./apps/tao_detection/ds-tao-detection -c configs/yolov4_tao/pgie_yolov4_tao_config_dgpu.txt -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_qHD.h264 
Now playing: configs/yolov4_tao/pgie_yolov4_tao_config_dgpu.txt
0:00:01.972944266   300 0x55a932f09e70 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/work/deepstream_tao_apps/models/yolov4/yolov4_resnet18_395.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x544x960       
1   OUTPUT kINT32 BatchedNMS      1               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

0:00:01.999874523   300 0x55a932f09e70 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /work/deepstream_tao_apps/models/yolov4/yolov4_resnet18_395.etlt_b1_gpu0_fp32.engine
0:00:02.003895408   300 0x55a932f09e70 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:configs/yolov4_tao/pgie_yolov4_tao_config_dgpu.txt sucessfully
Running...

The model is YOLOv4. I don't know what to do next. I ran it with and without display; the output file is 0 bytes in size.
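For anyone debugging the post-processing stage: the four BatchedNMS outputs listed in the logs above (BatchedNMS = kept-box count, BatchedNMS_1 = boxes 200x4, BatchedNMS_2 = scores, BatchedNMS_3 = class ids) decode along these lines. This is only an illustrative Python sketch of what the C++ parser NvDsInferParseCustomBatchedNMSTLT does, not the actual library code:

```python
# Illustrative sketch of decoding BatchedNMS-style outputs:
#   BatchedNMS   -> number of kept boxes (int)
#   BatchedNMS_1 -> boxes, shape [200, 4]
#   BatchedNMS_2 -> scores, shape [200]
#   BatchedNMS_3 -> class ids, shape [200]
# The real parsing is done by NvDsInferParseCustomBatchedNMSTLT in
# libnvds_infercustomparser_tlt.so; this is not that code.

def parse_batched_nms(count, boxes, scores, classes, threshold=0.3):
    """Keep the first `count` detections whose score clears `threshold`
    (0.3 matches pre-cluster-threshold in the config above)."""
    detections = []
    for i in range(count):
        if scores[i] >= threshold:
            detections.append(
                {"bbox": boxes[i], "score": scores[i], "class_id": classes[i]}
            )
    return detections

# Synthetic example: 2 kept boxes, one below the 0.3 threshold
dets = parse_batched_nms(
    count=2,
    boxes=[[0.1, 0.1, 0.4, 0.5], [0.2, 0.2, 0.3, 0.3]],
    scores=[0.9, 0.1],
    classes=[0, 3],
)
print(dets)  # only the 0.9-score box survives
```

If BatchedNMS reports 0 for every frame, the problem is upstream (preprocessing or the model itself) rather than in the parser or the clustering settings.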