Try using the YOLO converter module: savant.converter.yolo. YOLOv5/v6/v7 models have a different output converter than YOLOv3/v4. If it doesn't help, please attach the error log.
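In the module config it goes into the detector's output section, roughly like this (a sketch; the output layer name and class settings depend on your model):

output:
  layer_names:
  - output0
  converter:
    module: savant.converter.yolo
    class_name: TensorToBBoxConverter
  num_detected_classes: 80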
I've already tried it but got no result. Here are the output logs of the module; it just stops without any reason. It stops without any error, and my run_source process hangs since the module process is down.
Running docker command
2022-11-24 14:32:48,659 [savant.config.module_config] [INFO] Configure module...
2022-11-24 14:32:48,666 [savant.config.json_resolver] [WARNING] JSON loads fail, returning None for "None".
2022-11-24 14:32:48,689 [savant.config.module_config] [INFO] Configure pipeline elements...
2022-11-24 14:32:48,692 [savant.deepstream.nvinfer.element_config] [INFO] Element nvinfer@detector:v1(name=yolonew): Path to the model files has been set to "/models/yolonew".
2022-11-24 14:32:48,704 [savant.deepstream.nvinfer.element_config] [INFO] Element nvinfer@detector:v1(name=yolonew): Model engine file has been set to "yolov5s.onnx_b1_gpu0_fp16.engine".
2022-11-24 14:32:48,710 [savant.deepstream.nvinfer.element_config] [INFO] Element nvinfer@detector:v1(name=yolonew): Resulting configuration file "/models/yolonew/yolov5s_config_savant.txt" has been saved.
2022-11-24 14:32:48,717 [savant.config.module_config] [INFO] Module configuration is complete.
0:00:00.032236812 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x120000: 'AVR (Audio Visual Research)' is not mapped
0:00:00.032265636 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x180000: 'CAF (Apple Core Audio File)' is not mapped
0:00:00.032269931 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x100000: 'HTK (HMM Tool Kit)' is not mapped
0:00:00.032281035 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0xc0000: 'MAT4 (GNU Octave 2.0 / Matlab 4.2)' is not mapped
0:00:00.032288598 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0xd0000: 'MAT5 (GNU Octave 2.1 / Matlab 5.0)' is not mapped
0:00:00.032292904 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x210000: 'MPC (Akai MPC 2k)' is not mapped
0:00:00.032297143 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0xe0000: 'PVF (Portable Voice Format)' is not mapped
0:00:00.032305661 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x160000: 'SD2 (Sound Designer II)' is not mapped
0:00:00.032310941 76 0x563dc1097860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x190000: 'WVE (Psion Series 3)' is not mapped
0:00:00.087359744 76 0x563dc1097860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libavcodec.so.58: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 14:32:48.849: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory
0:00:00.112745352 76 0x563dc1097860 WARN ladspa gstladspa.c:507:plugin_init:<plugin204> no LADSPA plugins found, check LADSPA_PATH
0:00:00.278103267 76 0x563dc1097860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libtritonserver.so: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 14:32:49.040: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
0:00:00.414926860 76 0x563dc1097860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libucs.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 14:32:49.177: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory
0:00:00.478047698 76 0x563dc1097860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: librivermax.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 14:32:49.240: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
2022-11-24 14:32:49,525 [savant.mytest] [INFO] Pipeline frame processing parameters: {'width': 1920, 'height': 1080, 'batch-size': 1, 'buffer-pool-size': 4, 'batched-push-timeout': 2000, 'live-source': False, 'interpolation-method': 6}.
2022-11-24 14:32:49,581 [savant.gstreamer.runner] [INFO] Starting pipeline `mytest<NvDsPipeline>: zeromq_source_bin:v1(name=source) -> nvstreammux:v1(name=muxer) -> nvinfer@detector:v1(name=yolonew) -> nvstreamdemux:v1(name=demuxer)`...
0:00:02.889315267 1 0x41ebc40 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<yolonew> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 2]: deserialized trt engine from :/models/yolonew/yolov5s.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kHALF output0 25200x85
0:00:02.927128978 1 0x41ebc40 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<yolonew> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 2]: Use deserialized engine model: /models/yolonew/yolov5s.onnx_b1_gpu0_fp16.engine
0:00:02.941748661 1 0x41ebc40 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<yolonew> [UID 2]: Load new model:/models/yolonew/yolov5s_config_savant.txt sucessfully
2022-11-24 14:32:51,699 [savant.gstreamer.runner] [INFO] Pipeline starting ended after 0:00:02.099368.
2022-11-24 14:32:54,886 [savant.avro_video_decode_bin] [INFO] Adding branch with source 1071
2022-11-24 14:32:54,890 [savant.avro_video_decode_bin] [INFO] Branch with source 1071 added
2022-11-24 14:32:54,890 [savant.avro_video_demux] [INFO] Created new src pad for source 1071: src_1071.
0:00:06.141861358 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.141887077 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:06.141903334 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.141911587 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:06.141928281 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.141937300 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat AV10
0:00:06.141941605 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.141947209 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat AV10
0:00:06.141957074 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.141966982 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:06.141978449 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.141985696 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:06.142002130 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142009210 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:06.142013363 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142020169 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:06.142033980 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142044934 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:06.142053644 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142059319 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:06.142071690 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142081045 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:06.142090026 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142098835 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:06.142110297 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142119266 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H265
0:00:06.142127418 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142136371 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H265
0:00:06.142148369 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142157523 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP90
0:00:06.142168633 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142177776 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP90
0:00:06.142189143 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142197809 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:06.142203183 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142209811 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:06.142218774 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142227511 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:06.142236113 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.142245383 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:06.142567766 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.142581222 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:06.142592345 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.142602096 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12
0:00:06.142609231 1 0x7feda4005ea0 WARN v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x7fed88074860 Failed to determine interlace mode
2022-11-24 14:32:55,019 [savant.mytest] [INFO] Added source 1071
0:00:06.270271725 1 0x7feda4005ea0 WARN v4l2videodec gstv4l2videodec.c:1847:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:06.270305027 1 0x7feda4005ea0 WARN v4l2bufferpool gstv4l2bufferpool.c:1082:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:06.271082420 1 0x7fed88031520 WARN v4l2bufferpool gstv4l2bufferpool.c:1533:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
There is no problem with the detector in the log. Maybe something is wrong with your source? I need more details.
1) What source adapter do you use? Did you try another adapter or another source, e.g. a video fragment? Try the images source adapter and point it at a folder with only one JPEG image. Or try your source with a known-working module.
2) Try running the module in debug mode: LOGLEVEL=DEBUG ./scripts/run_module... It will give more information about the moment the pipeline hangs.
3) Specify the device you are using: dGPU/Jetson, NVIDIA driver version.
2022-11-25 11:05:12,801 [savant.config.module_config] [INFO] Configure pipeline elements...
2022-11-25 11:05:12,801 [savant.config.module_config] [DEBUG] Getting element/elem_type/elem_ver from element config {'element': 'nvinfer@detector', 'name': 'yolonew', 'model': {'format': 'onnx', 'model_file': 'yolov5s.onnx', 'input': {'layer_name': 'images', 'shape': [3, 640, 640], 'scale_factor': 0.003921569790691137}, 'batch_size': 1, 'output': {'layer_names': ['output0'], 'num_detected_classes': 80, 'converter': {'module': 'savant.converter.yolo', 'class_name': 'TensorToBBoxConverter'}, 'objects': [{'class_id': 1, 'label': 'car'}]}}}
2022-11-25 11:05:12,801 [savant.config.module_config] [DEBUG] Parsed short notation nvinfer@detector, result element="nvinfer" elem_type="detector" elem_ver="None"
2022-11-25 11:05:12,804 [savant.deepstream.nvinfer.element_config] [INFO] Element nvinfer@detector:v1(name=yolonew): Path to the model files has been set to "/models/yolonew".
2022-11-25 11:05:12,816 [savant.deepstream.nvinfer.element_config] [INFO] Element nvinfer@detector:v1(name=yolonew): Model engine file has been set to "yolov5s.onnx_b1_gpu0_fp16.engine".
2022-11-25 11:05:12,822 [savant.deepstream.nvinfer.element_config] [INFO] Element nvinfer@detector:v1(name=yolonew): Resulting configuration file "/models/yolonew/yolov5s_config_savant.txt" has been saved.
2022-11-25 11:05:12,826 [savant.config.module_config] [DEBUG] Getting element/elem_type/elem_ver from element config {'element': 'drawbin', 'module': 'savant.deepstream.drawbin', 'class_name': 'NvDsDrawBin', 'element_type': 'detector'}
2022-11-25 11:05:12,826 [savant.config.module_config] [DEBUG] Parsed full definiton, result element="drawbin" elem_type="detector" elem_ver="None"
2022-11-25 11:05:12,832 [savant.config.module_config] [INFO] Module configuration is complete.
2022-11-25 11:05:12,873 [savant.config.module_config] [DEBUG] Module config:
name: mytest
parameter_init_priority:
  environment: 20
  etcd: 10
parameters:
  log_level: DEBUG
  model_path: /models
  download_path: /downloads
  dynamic_parameter_storage: etcd
  etcd_config:
    endpoints:
    - host: etcd-server
      port: 2379
    timeout: 15
  frame_width: 1920
  frame_height: 1080
  fps_period: 10000
  queue_maxsize: 100
  output_frame:
    codec: jpeg
  batch_size: 1
dynamic_parameters: {}
pipeline:
  source:
    element: zeromq_source_bin
    element_type: null
    version: v1
    name: null
    properties:
      socket: ipc:///tmp/zmq-sockets/input-video.ipc
      socket_type: REP
      bind: true
    dynamic_properties: {}
  elements:
  - element: nvinfer
    element_type: detector
    version: v1
    name: yolonew
    properties:
      config-file-path: /models/yolonew/yolov5s_config_savant.txt
    dynamic_properties: {}
    model:
      local_path: /models/yolonew
      remote: null
      model_file: yolov5s.onnx
      batch_size: 1
      precision: FP16
      input:
        object: frame
        layer_name: images
        shape:
        - 3
        - 640
        - 640
        maintain_aspect_ratio: false
        scale_factor: 0.003921569790691137
        offsets:
        - 0.0
        - 0.0
        - 0.0
        color_format: RGB
        preprocess_object_meta: null
        preprocess_object_tensor: null
        object_min_width: null
        object_min_height: null
        object_max_width: null
        object_max_height: null
      output:
        layer_names:
        - output0
        converter:
          module: savant.converter.yolo
          class_name: TensorToBBoxConverter
          kwargs: null
        objects:
        - class_id: 1
          label: car
          selector:
            module: savant.selector
            class_name: BBoxSelector
            kwargs:
              confidence_threshold: 0.5
              nms_iou_threshold: 0.5
        num_detected_classes: 80
        selection_type: 1
      format: ONNX
      config_file: null
      int8_calib_file: null
      engine_file: yolov5s.onnx_b1_gpu0_fp16.engine
      proto_file: null
      custom_config_file: null
      mean_file: null
      label_file: null
      tlt_model_key: null
      gpu_id: 0
      interval: 0
      custom_lib_path: null
      engine_create_func_name: null
      parse_bbox_func_name: null
  - module: savant.deepstream.drawbin
    class_name: NvDsDrawBin
    kwargs: null
    element: drawbin
    element_type: detector
    version: v1
    name: null
    properties:
      module: savant.deepstream.drawbin
      class: NvDsDrawBin
      location: ''
      kwargs: '{}'
    dynamic_properties: {}
    location: ''
  sink:
  - element: zeromq_sink
    element_type: null
    version: v1
    name: null
    properties:
      socket: ipc:///tmp/zmq-sockets/output-video.ipc
      socket_type: PUB
      bind: true
    dynamic_properties: {}
2022-11-25 11:05:12,873 [savant.utils.sink_factories] [DEBUG] Initializing ZMQ sink: socket ipc:///tmp/zmq-sockets/output-video.ipc, type PUB, bind True.
0:00:00.034213329 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x120000: 'AVR (Audio Visual Research)' is not mapped
0:00:00.034263606 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x180000: 'CAF (Apple Core Audio File)' is not mapped
0:00:00.034268390 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x100000: 'HTK (HMM Tool Kit)' is not mapped
0:00:00.034272825 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0xc0000: 'MAT4 (GNU Octave 2.0 / Matlab 4.2)' is not mapped
0:00:00.034283799 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0xd0000: 'MAT5 (GNU Octave 2.1 / Matlab 5.0)' is not mapped
0:00:00.034289735 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x210000: 'MPC (Akai MPC 2k)' is not mapped
0:00:00.034298458 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0xe0000: 'PVF (Portable Voice Format)' is not mapped
0:00:00.034306504 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x160000: 'SD2 (Sound Designer II)' is not mapped
0:00:00.034314239 76 0x560a084df860 WARN default gstsf.c:97:gst_sf_create_audio_template_caps: format 0x190000: 'WVE (Psion Series 3)' is not mapped
0:00:00.093027532 76 0x560a084df860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libavcodec.so.58: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 11:05:12.975: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstchromaprint.so': libavcodec.so.58: cannot open shared object file: No such file or directory
0:00:00.119628101 76 0x560a084df860 WARN ladspa gstladspa.c:507:plugin_init:
(gst-plugin-scanner:76): GStreamer-WARNING **: 11:05:13.170: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
0:00:00.341705023 76 0x560a084df860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libucs.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 11:05:13.224: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory
0:00:00.401579389 76 0x560a084df860 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: librivermax.so.0: cannot open shared object file: No such file or directory
(gst-plugin-scanner:76): GStreamer-WARNING **: 11:05:13.284: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
2022-11-25 11:05:13,470 [savant.zeromq_src.zeromq_src+zeromqsrc0] [DEBUG] Setting property "socket" to "ipc:///tmp/zmq-sockets/input-video.ipc".
2022-11-25 11:05:13,471 [savant.zeromq_src.zeromq_src+zeromqsrc0] [DEBUG] Setting property "socket-type" to "REP".
2022-11-25 11:05:13,471 [savant.zeromq_src.zeromq_src+zeromqsrc0] [DEBUG] Setting property "bind" to "True".
2022-11-25 11:05:13,473 [savant.mytest] [DEBUG] Added element zeromq_source_bin:v1(name=source): PipelineElement(element='zeromq_source_bin', element_type=None, version='v1', name='source', properties={'socket': 'ipc:///tmp/zmq-sockets/input-video.ipc', 'socket_type': 'REP', 'bind': True, 'convert-jpeg-to-rgb': False, 'max-parallel-streams': 64}, dynamic_properties={}).
2022-11-25 11:05:13,550 [savant.mytest] [DEBUG] Added element nvstreammux:v1(name=muxer): PipelineElement(element='nvstreammux', element_type=None, version='v1', name='muxer', properties={'width': 1920, 'height': 1080, 'batch-size': 1, 'buffer-pool-size': 4, 'batched-push-timeout': 2000, 'live-source': False, 'interpolation-method': 6}, dynamic_properties={}).
2022-11-25 11:05:13,550 [savant.mytest] [INFO] Pipeline frame processing parameters: {'width': 1920, 'height': 1080, 'batch-size': 1, 'buffer-pool-size': 4, 'batched-push-timeout': 2000, 'live-source': False, 'interpolation-method': 6}.
2022-11-25 11:05:13,587 [savant.mytest] [DEBUG] Added element nvinfer@detector:v1(name=yolonew): ModelElement(element='nvinfer', element_type='detector', version='v1', name='yolonew', properties={'config-file-path': '/models/yolonew/yolov5s_config_savant.txt'}, dynamic_properties={}, model=NvInferDetector(local_path='/models/yolonew', remote=None, model_file='yolov5s.onnx', batch_size=1, precision=<ModelPrecision.FP16: 2>, input=NvInferModelInput(object='frame', layer_name='images', shape=[3, 640, 640], maintain_aspect_ratio=False, scale_factor=0.003921569790691137, offsets=[0.0, 0.0, 0.0], color_format=<ModelColorFormat.RGB: 0>, preprocess_object_meta=None, preprocess_object_tensor=None, object_min_width=None, object_min_height=None, object_max_width=None, object_max_height=None), output=NvInferObjectModelOutput(layer_names=['output0'], converter=PyFunc(module='savant.converter.yolo', class_name='TensorToBBoxConverter', kwargs=None), objects=[NvInferObjectModelOutputObject(class_id=1, label='car', selector=PyFunc(module='savant.selector', class_name='BBoxSelector', kwargs={'confidence_threshold': 0.5, 'nms_iou_threshold': 0.5}))], num_detected_classes=80, selection_type=1), format=<NvInferModelFormat.ONNX: 2>, config_file=None, int8_calib_file=None, engine_file='yolov5s.onnx_b1_gpu0_fp16.engine', proto_file=None, custom_config_file=None, mean_file=None, label_file=None, tlt_model_key=None, gpu_id=0, interval=0, custom_lib_path=None, engine_create_func_name=None, parse_bbox_func_name=None)).
2022-11-25 11:05:13,602 [savant.mytest] [DEBUG] Added in/out probes to element nvinfer@detector:v1(name=yolonew).
2022-11-25 11:05:13,604 [savant.mytest] [DEBUG] Added element drawbin@detector:v1(name=drawbin+drawbin0): DrawBinElement(module='savant.deepstream.drawbin', class_name='NvDsDrawBin', kwargs=None, element='drawbin', element_type='detector', version='v1', name='drawbin+drawbin0', properties={'module': 'savant.deepstream.drawbin', 'class': 'NvDsDrawBin', 'location': '', 'kwargs': '{}'}, dynamic_properties={}, location='').
2022-11-25 11:05:13,606 [savant.mytest] [DEBUG] Added element nvstreamdemux:v1(name=demuxer): PipelineElement(element='nvstreamdemux', element_type=None, version='v1', name='demuxer', properties={}, dynamic_properties={}).
2022-11-25 11:05:13,607 [savant.gstreamer.runner] [INFO] Starting pipeline `mytest<NvDsPipeline>: zeromq_source_bin:v1(name=source) -> nvstreammux:v1(name=muxer) -> nvinfer@detector:v1(name=yolonew) -> drawbin@detector:v1(name=drawbin+drawbin0) -> nvstreamdemux:v1(name=demuxer)`...
2022-11-25 11:05:13,629 [savant.gstreamer.runner] [DEBUG] Adding signal watch and connecting callbacks...
2022-11-25 11:05:13,629 [savant.gstreamer.runner] [DEBUG] Setting pipeline to READY...
2022-11-25 11:05:13,782 [savant.avro_video_demux] [DEBUG] Start eviction loop
2022-11-25 11:05:13,782 [savant.avro_video_demux] [DEBUG] Waiting 60.0 seconds for the next eviction loop
2022-11-25 11:05:13,782 [savant.gstreamer.runner] [DEBUG] Setting pipeline to PLAYING...
0:00:02.812048924 1 0x22f9e70 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:
1 OUTPUT kHALF output0 25200x85
0:00:02.850690529 1 0x22f9e70 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:
The log looks normal; something like "Preparing output for frame ..." would usually follow. I still don't understand what the problem could be. Usually, when there are problems in postprocessing or with the model graph, there are corresponding messages in the logs.
If you can share your model or similar in onnx format with me, I can try to reproduce the problem.
OK. These are the steps I followed when creating my ONNX file:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt
python export.py --weights yolov5s.pt --include torchscript onnx --half --device 0
And I used the resulting ONNX model. My source video resolution is 1280x720.
We currently cannot work correctly with FP16 in Python converters. Use FP32 precision for the ONNX model and delegate the conversion to FP16 to TensorRT. In this case, the model engine will run at FP16 (and infer quickly), but the output tensor passed to the Python converter will be FP32.
Try with python export.py --weights yolov5s.pt --include onnx
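For reference, the model section of the module config can stay as it is; only the ONNX export changes. A sketch based on the config above:

model:
  format: onnx
  model_file: yolov5s.onnx  # exported without --half, i.e. FP32 weights
  precision: FP16           # TensorRT still builds an FP16 engine from the FP32 ONNX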
Thanks, this solved my problem. I got it.
I'm using yolov5s.onnx as the model file for the detector. However, it gets an error about parsing bboxes. I've tried to use a converter, but I could not get it to work. What is wrong here? How can I make Savant draw bboxes on the output with a YOLOv5 model?