Closed AyaNasser96 closed 2 years ago
@AyaNasser96 OMZ demos are built on the assumption that OMZ models will be used at inference. As you said, you trained your own model and changed the configuration to have a single detection class, which differs from the OMZ public yolo-v4 model. Note, the OMZ object_detection_demo provides a couple of optional command line params to specify non-default anchors and an anchor mask for custom yolo-v4 models. I am not sure whether that is enough to cover the case of a different number of classes, but at least you can try. If this does not help, you'll need to modify the demo to implement your custom model's output decoding.
@vladimir-dudnik thank you, I'll investigate that further. I've also noticed something: I already have different weights (included in the .bin file), so how can I provide those weights to the demo?
@AyaNasser96 the demo accepts a path to the model XML file (if you run it with models in OpenVINO IR form; note, it is also possible to apply a model in ONNX form, OpenVINO can accept ONNX models directly), and the assumption is that the .BIN file has the same name as the .XML file and is located in the same folder. In that case OpenVINO will find the .BIN file automatically. Although the OpenVINO API allows specifying both the path to the .XML and the path to the .BIN file within a call to ie.ReadModel, asking only for the .XML path is a demo simplification.
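The automatic weights lookup described above can be sketched like this (a minimal illustration using only the standard library, not the demo's actual code; the demo itself hands the resolved paths to the Inference Engine):

```python
from pathlib import Path

def resolve_weights(xml_path: str) -> str:
    """Mimic the demo's simplification: the .bin file is assumed to sit
    next to the .xml file, with the same base name."""
    return str(Path(xml_path).with_suffix(".bin"))

print(resolve_weights("/home/aya/Plate_IR/Plate_yolov4.xml"))
# -> /home/aya/Plate_IR/Plate_yolov4.bin
```

So as long as the .bin produced by Model Optimizer is kept beside its .xml with a matching name, nothing extra needs to be passed on the demo command line.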
@vladimir-dudnik This explains A LOT XD Thank you ^^, I'll investigate more and get back to you ^^
@vladimir-dudnik
I tried the Python sample (object_detection_demo.py), first with -at yolo; then I found there is a yolov4 option, and both ways I got a post-processing problem.
I ran the model with python sample as follows:
python3 object_detection_demo.py -d CPU -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -at yolo --labels /home/aya/Deployment_project/Plate_IR/Plate.txt -i /home/aya/Deployment_project/test_samples/1.jpg
and I got this error:
File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/models/yolo.py", line 214, in postprocess
out_blob.shape = layer_params[0]
ValueError: cannot reshape array of size 42588 into shape (1,8112,1,4)
File "object_detection_demo.py", line 331, in main
    results = detector_pipeline.get_result(next_frame_id_to_show)
File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/pipelines/async_pipeline.py", line 132, in get_result
    return self.model.postprocess(raw_result, preprocess_meta), meta
File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/models/yolo.py", line 215, in postprocess
    detections += self._parse_yolo_region(out_blob, meta['resized_shape'], layer_params[1], self.threshold)
File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/models/yolo.py", line 272, in _parse_yolo_region
    for row, col, n in np.ndindex(params.sides[0], params.sides[1], params.num):
IndexError: list index out of range
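The numbers in the traceback already hint at the mismatch. A YOLOv4 model at 416x416 with 3 anchors per cell yields 3 * (52² + 26² + 13²) = 10647 candidate boxes, so an ONNX-style already-decoded boxes output holds 10647 * 4 = 42588 values, while the demo expects raw per-layer grids (e.g. 52 * 52 * 3 = 8112 cells for the largest grid). A quick check of this arithmetic (my own illustration, not demo code):

```python
# Grid sizes for a 416x416 YOLOv4 input, 3 anchors per grid cell.
grids = [52, 26, 13]
anchors_per_cell = 3

total_boxes = anchors_per_cell * sum(g * g for g in grids)
print(total_boxes)                        # 10647 decoded candidate boxes
print(total_boxes * 4)                    # 42588 -> the array size in the ValueError
print(grids[0] ** 2 * anchors_per_cell)   # 8112  -> the shape the demo tried to use
```

In other words, the model exported through ONNX emits one flat, already-decoded tensor, and the demo's raw-grid reshape cannot fit it.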
This is related to post-processing, since I have 1 class as output. Do you know how to edit the post-processing in the C++ sample?
@fzhar @ivikhrev @akorobeinikov @anzhella-pankratova could you please help with answering this ?
@AyaNasser96 meanwhile, you may review one of the previous discussions regarding a custom yolo-v4 model, #2598. You may also find it useful to review this topic on the OpenVINO forum (it discusses how to convert a custom yolo-v3 with 3 classes to IR).
Sorry for the late reply,
I want to point out that, in addition to the above error message, when I choose yolov4 it reads the number of classes as 3544, not 1 class!
I think this means it's not accessing the output layer! But when I print the output layer names, I find they point to confs and boxes! So I really don't know where the problem is. How could it say that the number of classes is 3544?
I also found my yolov4 config contains the following: classes=1 num=9
This is different from the OpenVINO yolov4 Python code, which has num=3 hard-coded!
So I changed those parameters according to my config file, and the number of classes is now 1178!
Do I need to check something else in the config file? Even though I already used this config to convert the yolov4 to ONNX and then to IR, I thought it should preserve this information!
> Which is different from the openvino yolov4 python code, that has: num=3 "as a hard coded !!"
It is not a mistake; we use a slightly different approach for num estimation. num in the darknet config contains the sum over all outputs (3 for each output in yolo v4 gives 9) and is then divided by the number of outputs; in the postprocessing code we use the already-divided per-output num setting.
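This per-output num also explains the strange class counts reported above. Roughly speaking (the real logic in yolo.py is more involved, so treat this as a sketch), the demo estimates the class count from the flattened output size as size // num - 5, since each anchor encodes (x, y, w, h, objectness) plus one score per class. With an ONNX-decoded output of 10647 boxes, that formula yields exactly the numbers seen: 3544 with the hard-coded num=3, and 1178 after changing num to 9.

```python
def estimate_classes(output_size: int, num: int) -> int:
    # Sketch of the heuristic: each anchor encodes
    # (x, y, w, h, objectness) + one score per class.
    return output_size // num - 5

print(estimate_classes(10647, 3))  # 3544 -- with the hard-coded num=3
print(estimate_classes(10647, 9))  # 1178 -- after setting num=9 from the cfg
print(estimate_classes(18, 3))     # 1    -- a proper raw grid: 3 * (5 + 1) = 18 channels
```

Only when the output really is a raw darknet-style grid (3 anchors times 5+classes channels) does the heuristic recover the true class count, which is why the conversion path matters.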
Unfortunately, due to the lack of native darknet support for conversion to IR, different conversion paths may lead to different results. I mean that the model output can be represented in different formats when you use different implementations for converting yolo models to different frameworks (caffe, pytorch, tf, onnx, etc.). Our demo code is focused on the darknet-tf-openvino path, while you mention that you converted the model to onnx. Probably that is the root cause of your problems with integrating the model into the demos. Our OMZ yolo v4 support is based on https://github.com/david8862/keras-YOLOv3-model-set; we convert yolo v4 weights to a TF saved model using the darknet config and a script from this repo. You can find the parameters used for conversion in our pre-convert script (you will probably need to modify the paths to your config and weights): https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/yolo-v4-tf/pre-convert.py. After that, convert to IR. Could you please try this way? I think it should help align your model with our demo.
@eaidova Many thanks for your amazing conclusion. I managed to convert the model and run it successfully with the Python sample and the C++ samples
(the Python samples sometimes give me an overflow error, so I used the C++ samples). Python error:
But this model is meant to detect license plates. I had tested it with images before, and when I use the same images now with C++, the program runs successfully but no bounding box is drawn on the image!
Where could the problem be? Could it be just an OpenCV problem, or is the model unable to find the correct object bbox?
How did you convert the model to IR? Our OMZ demos assume that the model expects a BGR image in the [0, 255] range. This means that some preprocessing options should be included in the MO command line (if I remember right, the standard for yolo models is an RGB image in [0, 1]): --scale 255 --reverse_input_channels
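Concretely, the mismatch looks like this (a numpy sketch of my own, not demo code): the model was trained on RGB inputs in [0, 1], while the demo feeds BGR in [0, 255]. The --reverse_input_channels and --scale 255 flags bake the equivalent conversion into the IR so the demo's input reaches the network in the form it expects.

```python
import numpy as np

# A dummy BGR frame in [0, 255], as the OMZ demo supplies it.
bgr = np.array([[[10.0, 20.0, 30.0]]])  # one pixel: B=10, G=20, R=30

# What the yolo model actually expects: RGB in [0, 1].
# --reverse_input_channels flips the channel order; --scale 255 rescales.
rgb_01 = bgr[..., ::-1] / 255.0

print(rgb_01)  # [[[0.1176..., 0.0784..., 0.0392...]]]
```

Without those flags the network receives values roughly 255 times larger than it was trained on, with swapped channels, so it can still run but produces no usable detections, which matches the "no bounding box" symptom.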
@eaidova Thank you, I forgot to do that scaling step; this worked perfectly ^^ I will submit a detailed solution here and then close the issue.
The conclusion is:
If you want to work with a yolov4 custom model converted from another framework to ONNX, you'll have to build your own OpenVINO script (because the ONNX export has different post-processing than the samples offered by OpenVINO).
You should instead convert your darknet model to TensorFlow and then to IR with the following method (for TensorFlow):
Using openvino docker (openvino/ubuntu20_dev - Docker Image | Docker Hub)
mkdir tf-IR
Clone this repo inside tf-IR: git clone https://github.com/david8862/keras-YOLOv3-model-set.git
Put your_model.weights in tf-IR
Put your config, your_model.cfg, in keras-YOLOv3-model-set/cfg
cd /opt/intel/openvino_2021.4.752/deployment_tools/models/public/yolo-v4-tf
You should run pre-convert.py, but first edit the script at this section:
for example, to:
Run the converter
The saved model will be a folder whose name you specify in the converter script, inside the tf-IR folder.
Then go to /opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer
Before you run the conversion, make sure you have installed all the Model Optimizer dependencies for TensorFlow 2 (see the reference).
virtualenv --system-site-packages -p python3 ./venv
source ./venv/bin/activate
pip3 install -r requirements_tf2.txt
Run: python3 mo.py --saved_model_dir <path_to_saved_model> --output_dir <output_dir> --input_shape [1,416,416,3] --model_name yolov4 --scale 255 --reverse_input_channels
Run C++ with the model
cd /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/build/intel64/Debug
./object_detection_demo -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg -m /home/aya/Deployment_project/openvino-assest/plate_IR/yolov4.xml -at yolo -labels /home/aya/Deployment_project/Plate_IR/Plate.txt -o out2.jpg
I've trained my own custom yolov4 (darknet) on 1 class, then converted it successfully to ONNX and to IR as well. Now I want to run inference on the IR model with the C++ samples. I used these samples (multi_channel_object_detection_demo_yolov3 and object_detection_demo) and both give me the following error: Segmentation fault (core dumped)
The run code is:
./multi_channel_object_detection_demo_yolov3 -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg
OR
./object_detection_demo -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg -at yolo -labels /home/aya/Deployment_project/Models/Plate/Plate.txt
I use this docker image: https://hub.docker.com/r/openvino/ubuntu20_dev my cpu is: Intel® Xeon(R) CPU E5-2695 v4 @ 2.10GHz × 6
Is there anything I am doing wrong?