Closed divdaisymuffin closed 2 years ago
A general reminder - there is a guide on what parameters to set for YOLO converters: https://github.com/openvinotoolkit/dlstreamer_gst/wiki/How-to-create-model-proc-file I highly recommend familiarizing yourself with this if you want to resolve such problems yourself without a long wait for an answer.
@divdaisymuffin In your case, the model-proc for head_yolov4_tiny_608_default_anchors_mask_012_FP32:
{
    "json_schema_version": "2.0.0",
    "input_preproc": [],
    "output_postproc": [
        {
            "converter": "tensor_to_bbox_yolo_v3",
            "iou_threshold": 0.4,
            "classes": 1,
            "anchors": [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0],
            "masks": [3, 4, 5, 0, 1, 2],
            "bbox_number_on_cell": 3,
            "cells_number": 19,
            "labels": ["face"]
        }
    ]
}
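To illustrate how the fields above fit together, here is a minimal sketch, assuming the standard YOLOv3/v4-tiny conventions that this converter follows: anchors are flattened (width, height) pairs, and each entry in masks indexes one of those pairs, with bbox_number_on_cell anchors assigned per output layer.

```python
# Sketch of how the model-proc fields relate (assumption: standard
# YOLOv3/v4-tiny anchor/mask conventions).
anchors = [10.0, 14.0, 23.0, 27.0, 37.0, 58.0,
           81.0, 82.0, 135.0, 169.0, 344.0, 319.0]
masks = [3, 4, 5, 0, 1, 2]   # anchor indices, 3 per output layer
bbox_number_on_cell = 3

# Anchors are (width, height) pairs; masks index into those pairs.
pairs = list(zip(anchors[0::2], anchors[1::2]))

# The first output layer (coarsest grid) uses mask indices 3, 4, 5,
# i.e. the largest anchors.
first_layer = [pairs[i] for i in masks[:bbox_number_on_cell]]
print(first_layer)  # [(81.0, 82.0), (135.0, 169.0), (344.0, 319.0)]
```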
You do not need to set the RGB color space, because the IR model already handles it.
@dsmertin we have tried this as well, but we still see the same issue.
Hi,
In this case, please send logs captured with the GST_DEBUG=3 environment variable. The result of using the model and model-proc file from my previous comment seemed accurate to me, and it is definitely better if I use your model-proc.
Thanks, Dmitry
@dsmertin
Please find the details:
We have already tried all possible permutations and combinations of the model-proc. Logs attached: logs.txt
@divdaisymuffin The master branch of Smart-City-Sample has recently been updated to 21.6.1 with a newer OpenVINO version - I would recommend updating to that (independent of this issue), as it will be easier to support.
Hello @divdaisymuffin
I didn't see the issue with the following setup:
I ran with the following:
Browser Output:
Original video (i.e. result_with_original_model.avi, looks very similar to above):
Logs from analytics container with GST_DEBUG=3:
PipelineStatus(avg_fps=11.636832583642384, avg_pipeline_latency=2.644501693345405, elapsed_time=228.95639276504517, id=1, start_time=1638559079.130393, state=<State.RUNNING: 2>)
PipelineStatus(avg_fps=11.674381560870563, avg_pipeline_latency=2.6444818880147434, elapsed_time=231.96294951438904, id=1, start_time=1638559079.130393, state=<State.RUNNING: 2>)
0:03:54.812535050 1 0x2b5dde0 WARN libav gstavviddec.c:972:gst_ffmpegviddec_get_buffer2:<avdec_h264-2> Couldn't get codec frame !
0:03:54.812560446 1 0x2b5dde0 ERROR libav :0:: get_buffer() failed
0:03:54.812569607 1 0x2b5dde0 ERROR libav :0:: thread_get_buffer() failed
0:03:54.812576033 1 0x2b5dde0 ERROR libav :0:: decode_slice_header error
0:03:54.812582871 1 0x2b5dde0 ERROR libav :0:: no frame!
0:03:54.812591673 1 0x2b5dde0 WARN libav gstavviddec.c:2019:gst_ffmpegviddec_handle_frame:<avdec_h264-2> Failed to send data for decoding
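When skimming a long GST_DEBUG=3 capture, it can help to tally the libav warnings and errors rather than read line by line. A minimal sketch, using sample lines copied from the log above (the regex assumes the usual GST_DEBUG line layout: timestamp, PID, pointer, level, category, message):

```python
import re

# Sample lines copied from the GST_DEBUG=3 log in this thread.
log = """\
0:03:54.812535050 1 0x2b5dde0 WARN libav gstavviddec.c:972:gst_ffmpegviddec_get_buffer2:<avdec_h264-2> Couldn't get codec frame !
0:03:54.812560446 1 0x2b5dde0 ERROR libav :0:: get_buffer() failed
0:03:54.812569607 1 0x2b5dde0 ERROR libav :0:: thread_get_buffer() failed
0:03:54.812576033 1 0x2b5dde0 ERROR libav :0:: decode_slice_header error
0:03:54.812582871 1 0x2b5dde0 ERROR libav :0:: no frame!
0:03:54.812591673 1 0x2b5dde0 WARN libav gstavviddec.c:2019:gst_ffmpegviddec_handle_frame:<avdec_h264-2> Failed to send data for decoding
"""

# Match: timestamp, PID, pointer, level, then the rest of the line.
pattern = re.compile(r"^\S+\s+\d+\s+\S+\s+(WARN|ERROR)\s+")

counts = {"WARN": 0, "ERROR": 0}
for line in log.splitlines():
    m = pattern.match(line)
    if m:
        counts[m.group(1)] += 1

print(counts)  # {'WARN': 2, 'ERROR': 4}
```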
Could you try with the same settings or compare the setup with your failing scenario?
@vidyasiv Thanks for the support. Yes, it worked for us as well with the suggested changes. We tried the same thing for issue https://github.com/OpenVisualCloud/Smart-City-Sample/issues/802, but it is not working; could you please look into it?
@divdaisymuffin Based on this update we will close this issue.
Hi, @nnshah1
We were able to run the YOLOv4-tiny model successfully at an image size of 416×416 with a model-proc. Due to certain accuracy issues, we had to increase the image size to 608×608, but when we ran that on Smart-City-Sample with the same model-proc, it failed with model-proc errors. However, when we downscaled 608×608 back to 416×416 during the IR conversion and deployed to Smart-City-Sample with the same model-proc, it worked. So please suggest: do we need to make some changes in the model-proc?
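One likely reason the same model-proc fails after changing resolution is that cells_number depends on the network input size: the coarsest YOLO output grid is input_size / 32 (a sketch, assuming the standard tiny-YOLO stride of 32 for the coarsest layer):

```python
# cells_number in the model-proc must match the network input size:
# the coarsest output grid of a stride-32 YOLO layer is input_size / 32
# (assumption: standard tiny-YOLO strides).
for input_size in (416, 608):
    print(input_size, "->", input_size // 32)
# 416 -> 13
# 608 -> 19
```

So a model-proc written for a 416×416 input (cells_number 13) would not match a 608×608 network, which needs cells_number 19, as in the model-proc earlier in this thread.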
You can find both models and their model-proc files here: model and model proc