zhouzhou0322 closed this issue 2 years ago.
Support for Yolo-V5 was added in the latest release: https://github.com/dlstreamer/dlstreamer/releases/tag/2022.2-release.
You can find a model-proc file now as part of the samples, here: https://github.com/dlstreamer/dlstreamer/blob/master/samples/gstreamer/model_proc/public/yolo-v5.json
Have a look at this repo as a base: https://github.com/ultralytics/yolov5 - clone it, export the model (see e.g. https://github.com/ultralytics/yolov5/blob/master/export.py), and then you can use e.g. gvadetect for object detection with the model-proc file, like other object-detection pipelines.
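A rough end-to-end sketch (assuming the export.py flags from the Ultralytics repo; file names, paths and device are placeholders, and the exact output path of the export may vary by version):

# clone the repo and export the pretrained weights to OpenVINO IR
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
python export.py --weights yolov5m.pt --include openvino --imgsz 640

# run the exported IR through gvadetect with the sample model-proc
gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=yolov5m_openvino_model/yolov5m.xml \
            model-proc=/path/to/dlstreamer/samples/gstreamer/model_proc/public/yolo-v5.json \
            device=CPU ! \
  gvawatermark ! videoconvert ! autovideosink sync=false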
@brmarkus Hi, I converted both the YOLOv5 COCO model and a custom model to OpenVINO XMLs and BINs, which can be loaded by the official detect.py for inference, and the results look good. But the same converted models cannot run in the DL Streamer 2022.2 release. Can you please help take a look at this post?
Thanks for the info. I am using the model-proc file provided and trying to get the yolov5m v6.2 model running in DL Streamer, using the latest dlstreamer-gpu container provided (trying to run on an ATS-M GPU). Here's what I got - any clue? The pipeline looks something like this:
gst-launch-1.0 filesrc location=$video_file ! decodebin ! vaapipostproc ! gvadetect model-instance-id=inf0 model=$model_path labels=$label_path pre-process-backend=vaapi-surface-sharing batch_size=32 device=GPU inference-interval=2 ! gvatrack tracking-type=short-term-imageless ! gvaclassify model-instance-id=cla0 model=$model_path_cls inference-region=roi-list pre-process-backend=vaapi-surface-sharing batch_size=32 device=GPU ! queue ! gvafpscounter ! fakesink sync=false
and the image is:
docker pull intel/dlstreamer:2022.2.0-ubuntu20-gpu419.40-devel
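For completeness, the container is started roughly like this (a sketch - the mounted model directory is a placeholder, and you may additionally need to pass your host's render group for GPU access):

docker run -it --rm --device /dev/dri \
  -v /path/to/models:/home/dlstreamer/models \
  intel/dlstreamer:2022.2.0-ubuntu20-gpu419.40-devel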
You mentioned you use "ats-m gpu" - can I assume you can use any other model (e.g. from OpenModelZoo) successfully on this GPU, using DL-Streamer?
Above you mentioned "can be imported by the official detect.py to do inference. The results look good" have you tested detect.py
in the same DL-Streamer Docker container, or outside the container on your HOST?
Sorry, for various reasons I have to stick with YOLOv5. Please help me if you can - I'm trying to adapt the Intel GPU into our workflow, and GPU support in DL Streamer is crucially needed at the moment.
Above you mentioned that YOLOv5 is supported in OpenVINO 2022.2 - has it been validated on the Intel ATS-M GPU, or does it only work on CPU at the moment?
OpenVINO lists ATS in its release-notes, see "https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes.html#inpage-nav-1". DL-Streamer's latest release 2022.2, see "https://github.com/dlstreamer/dlstreamer/releases/tag/2022.2-release", mentions "Intel® Data Center GPU Flex Series 140 and 170".
Can you try a quick test with any simple model (like ssd-mobilenet, face-detection-adas-0001) in your environment on your GPU? Is your environment inside a Docker container or "natively" on your host?
When you say "detect.py to do inference. The results look good", have you performed the inference on the dGPU?
Unfortunately I don't have an ATS-M dGPU at hand.
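For that quick test, something along these lines should be enough (a sketch - the model path is a placeholder for wherever the face-detection-adas-0001 IR from OpenModelZoo was downloaded; this SSD-style model should not need a model-proc):

gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! vaapipostproc ! \
  gvadetect model=/path/to/face-detection-adas-0001.xml device=GPU \
            pre-process-backend=vaapi-surface-sharing ! \
  gvafpscounter ! fakesink sync=false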
I did try the YOLOv4 one; it works on the dGPU.
When you say "detect.py to do inference. The results look good", have you performed the inference on the dGPU?
Those are not my words - another user who has the same issues with YOLOv5 came to this issue and posted his question.
Can you share your model files and/or a description of how to get/re-train/export/convert the model you use (which format do you use, ONNX or IR (XML&BIN)?), which versions (e.g. which commit of the Yolo repo, which Python version), and which tools and command lines, please?
Then engineering could have a look and try to reproduce.
Not using a dGPU but the embedded GPU on ADL-S, I'm able to run YoloV5l and YoloV5n - but I haven't used your variant "yolov5m v6.2".
Yes, but our models and details are not suitable for sharing publicly. Can I send them to you via this email (markus.broghammer@intel.com)?
I wanted to clarify first whether your environment is working and everything needed is in place (for OpenVINO, for DL-Streamer and for the underlying HW).
If you get YoloV4 working, then I want to encourage you to first try the "original" YoloV5 (e.g. YoloV5l or YoloV5n).
If the original YoloV5 models are working, then your model seems to be customized and might require more in-depth analysis from OpenVINO (https://github.com/openvinotoolkit/openvino/issues).
I confirm that YOLOv4 works using the DL Streamer GPU Docker images provided on your official Docker Hub. The YOLOv5 model is not a variant - it is the latest yolov5m model from the official yolov5 GitHub repo.
@akwrobel can you comment on YoloV5 tested on dGPU (ATS-M)? I don't have a dGPU at hand.
@zhouzhou0322 Did you test on CPU? Does it work well?
Putting ATS-M aside first...
I was able to get Yolov5m v6.2 working on DLStreamer 2022.2 if I convert to ONNX with "--imgsz 416" and specify Conv_272, Conv_291 and Conv_310 as outputs during OpenVINO IR conversion.
However, when I tried with --imgsz 640, gvadetect gave no output on either GPU or CPU.
@zhouzhou0322 , how did you convert your model?
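The 416 conversion described above was roughly along these lines (a sketch - the Conv_* output node names came from this particular export and may differ for other exports, so check the ONNX graph first; depending on how input scaling is handled, Model Optimizer may additionally need --scale 255 --reverse_input_channels):

python export.py --weights yolov5m.pt --include onnx --imgsz 416
mo --input_model yolov5m.onnx --output Conv_272,Conv_291,Conv_310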
@zhouzhou0322, I tested with intel/dlstreamer:2022.2.0-ubuntu20-gpu815-devel (this image comes with the model-proc at /opt/intel/dlstreamer/samples/gstreamer/model_proc/public/yolo-v5.json and labels at /opt/intel/dlstreamer/samples/labels/coco_80cl.txt).
Yet to try out the ATS-M dGPU, which will require this image: intel/dlstreamer:2022.2.0-ubuntu20-gpu419.40-devel
Update: Tested the same with intel/dlstreamer:2022.2.0-ubuntu20-gpu419.40-devel on an ATS-M M1 host with the GPU pipeline, and it is working.
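A pipeline using the model-proc and labels baked into the image could look roughly like this (a sketch - the model path is a placeholder for your exported YOLOv5 IR):

gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! vaapipostproc ! \
  gvadetect model=/home/dlstreamer/models/yolov5m.xml \
            model-proc=/opt/intel/dlstreamer/samples/gstreamer/model_proc/public/yolo-v5.json \
            labels=/opt/intel/dlstreamer/samples/labels/coco_80cl.txt \
            pre-process-backend=vaapi-surface-sharing device=GPU ! \
  gvafpscounter ! fakesink sync=false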
@alexlamfromhome with --imgsz 640, could you try this? Please update or exclude cells_number in the model-proc to get results, per https://dlstreamer.github.io/dev_guide/how_to_create_model_proc_file.html#how-to-model-proc-ex2-output-postproc, where cells_number = input_layer_size // 32. Either remove "cells_number": 13 or set "cells_number": 20 for size 640. We should probably update the model-proc to not set cells_number by default, to accommodate various input sizes.
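Concretely, a 640x640 input gives 640 // 32 = 20, while the shipped value of 13 matches a 416x416 input (416 // 32 = 13). A quick way to patch the shipped file (assuming the key appears as "cells_number": 13 in your copy - check the file first):

sed -i 's/"cells_number": 13/"cells_number": 20/' \
  /opt/intel/dlstreamer/samples/gstreamer/model_proc/public/yolo-v5.json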
@zhouzhou0322 , how did you convert your model?
Looks like I didn't specify the img_size - the default is 640. I didn't get output from gvadetect either.
Thanks @vidyasiv! "cells_number: 20" did the trick!
"cells_number: 20" works for me as well. Cheers!
Nice! Thanks for spotting this detail @nnshah1
Is there any sample and model_proc file I can refer to for implementing YOLOv5 in DL Streamer?