Open: divdaisymuffin opened this issue 3 years ago
Yes, that should be possible - I will provide an example. Question: should the results for the detections be combined and sent together (one set per frame), or separated?
@nnshah1, yes - currently I can see the person model sending data, and only its bounding boxes are visible. What I ideally want is for the person to be detected first, then the face, and the face detection results fed to the recognition models. So yes, can the results be combined and sent together for each frame? Please help with this.
To clarify - do you want to do face detection only within the person detection region, or to do them independently? That is, do you want faces and people detected separately, or people detection -> face detection (within detected people) -> recognition (within faces)?
Actually I want both:
1. people detection -> face detection (within detected people) -> recognition (within faces) -> mqtt
2. For another purpose: live stream -> first branch -> person detection -> mqtt; second branch -> face detection -> age-gender recognition -> mqtt
For the second one - is it sufficient to have live stream -> person detection -> face detection -> age-gender recognition -> mqtt (i.e. both branches combining into a single mqtt endpoint)?
Yes, but will it still send data if a person is standing with their back to the camera and no face is visible - will data be sent to mqtt in that case?
yes
Yeah, then that's great for me.
Please find a template below for each use case. The first pair (gst-launch command followed by the pipeline template) chains person detection -> face detection within the detected person ROIs -> age-gender recognition; the second pair runs person and face detection on the full frame and publishes everything to a single mqtt endpoint.
gst-launch-1.0 uridecodebin uri=<input file> ! gvadetect model=person-detection.xml model-proc=person-detection.json ! gvadetect model=face-detection.xml model-proc=face-detection.json object-class=person inference-region=roi-list ! gvaclassify model=age-gender-recognition-retail-0013.xml model-proc=age-gender-recognition-retail-0013.json ! gvametaconvert ! gvametapublish ! fakesink
"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 object-class=person inference-region=roi-list ! gvaclassify model=\"{models[age-gender-recognition-retail-0013][1][network]}\" model-proc=\"{models[age-gender-recognition-retail-0013][1][proc]}\" name=\"recognition\" model-instance-id=recognition ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"
gst-launch-1.0 uridecodebin uri=<input file> ! gvadetect model=person-detection.xml model-proc=person-detection.json ! gvadetect model=face-detection.xml model-proc=face-detection.json ! gvametaconvert ! gvametapublish ! fakesink
"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"
Thanks @nnshah1 and @tthakkal. Let me try these.
@tthakkal @nnshah1 I want to run 2 detection models together: one is a head detection model, and the other is a model that should take the ROI from the first model and run only on the specific ROIs passed by the first detection model.
Based on your previous suggestion I tried using roi-list, but it is not working for me.
Please see the pipeline that I am trying to run.
"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][network]}\" model-proc=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][proc]}\" name=\"detection\" threshold=0.40 object-class=person inference-region=roi-list ! gvadetect model=\"{models[age_gender_new_75][1][network]}\" model-proc=\"{models[age_gender_new_75][1][proc]}\" name=\"detection2\" model-instance-id=detection2 ! gvametaconvert name=\"metaconvert\" ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60500000000 name=\"splitmuxsink\"",
It gets stuck with the following error:
object-class and inference-region should be part of the second detection. Please update and try.
rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][network]}\" model-proc=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][proc]}\" name=\"detection\" threshold=0.40 ! gvadetect model=\"{models[age_gender_new_75][1][network]}\" model-proc=\"{models[age_gender_new_75][1][proc]}\" name=\"detection2\" model-instance-id=detection2 object-class=person inference-region=roi-list ! gvametaconvert name=\"metaconvert\" ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60500000000 name=\"splitmuxsink\"
@divdaisymuffin, if it is head detection, please set the right object-class based on the label mentioned in the model-proc of the first detection.
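For illustration only (hypothetical file names; the actual label comes from your first model's model-proc): if that model-proc listed "head" under labels rather than "person", the second detector would need to reference that label, e.g.
... ! gvadetect model=head-detection.xml model-proc=head-detection.json name=detection threshold=0.40 ! gvadetect model=age_gender.xml model-proc=age_gender.json name=detection2 object-class=head inference-region=roi-list ! ...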
@tthakkal I tried the shared pipeline as well; the error remains the same, and in the model-proc the class name is "person" only:
{ "json_schema_version": "2.0.0", "input_preproc": [], "output_postproc": [ { "converter": "tensor_to_bbox_yolo_v3", "iou_threshold": 0.4, "classes": 1, "anchors": [ 10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0 ], "masks": [ 3, 4, 5, 0, 1, 2 ], "bbox_number_on_cell": 3, "cells_number": 13, "labels": [ "person" ] } ]
Try running the pipeline with gst-launch by exec'ing into the container, and see if it works.
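A minimal sketch of getting a shell in the container first (the container name below is a placeholder; substitute whatever docker ps shows for your deployment):
docker exec -it video-analytics-serving /bin/bash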
gst-launch-1.0 rtspsrc location=<rtsp source> udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! queue ! decodebin ! videoconvert ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=<path to head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8 model xml> model-proc=<path to head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8 model-proc json> name=detection threshold=0.40 ! gvadetect model=<path to age_gender_new_75 model xml> model-proc=<path to age_gender_new_75 json> name=detection2 model-instance-id=detection2 object-class=person inference-region=roi-list ! gvametaconvert ! gvametapublish ! fakesink
For any further debugging, let's set up a meeting.
@divdaisymuffin Which version are you using? If the element doesn't support the property it's probably a DL Streamer version mismatch.
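One quick way to check (a sketch, assuming gst-inspect-1.0 is available in the same environment the pipeline runs in) is to inspect gvadetect and see whether the properties show up at all:
gst-inspect-1.0 gvadetect | grep -E "inference-region|object-class"
If the grep prints nothing, the installed gvadetect does not expose those properties.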
Can I use two or more gvadetect elements? I actually want to use person detection along with face detection. I tried something like the below, but it didn't work.
{ "name": "object_detection", "version": 2, "type": "GStreamer", "template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! t. ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! t. ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 ! gvaclassify model=\"{models[age-gender-recognition-retail-0013][1][network]}\" model-proc=\"{models[age-gender-recognition-retail-0013][1][proc]}\" name=\"recognition\" model-instance-id=recognition ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"", "description": "Object Detection Pipeline", "parameters": { "type" : "object", "properties" : { "inference-interval": { "element":"detection", "type": "integer", "minimum": 0, "maximum": 4294967295 }, "cpu-throughput-streams": { "element":"detection", "type": "string" }, "n-threads": { "element":"videoconvert", "type": "integer" }, "nireq": { "element":"detection", "type": "integer", "minimum": 1, "maximum": 64 }, "recording_prefix": { "type":"string", "default":"recording" } } } }