openvinotoolkit / open_model_zoo

Pre-trained Deep Learning models and demos (high quality and extremely fast)
https://docs.openvino.ai/latest/model_zoo.html
Apache License 2.0

Custom YOLOv4 IR inference with the C++ samples gives Segmentation fault (core dumped) #3150

Closed: AyaNasser96 closed this issue 2 years ago

AyaNasser96 commented 2 years ago

I've trained my own custom YOLOv4 (darknet) on 1 class, then converted it to ONNX and to IR successfully. Now I want to run inference on the IR model with the C++ samples. I used multi_channel_object_detection_demo_yolov3 and object_detection_demo, and both give me the following error: Segmentation fault (core dumped).

The commands I ran are:

./multi_channel_object_detection_demo_yolov3 -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg

OR

./object_detection_demo -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg -at yolo -labels /home/aya/Deployment_project/Models/Plate/Plate.txt

I use this Docker image: https://hub.docker.com/r/openvino/ubuntu20_dev. My CPU is an Intel® Xeon® CPU E5-2695 v4 @ 2.10GHz × 6.

Is there anything I'm doing wrong?

vladimir-dudnik commented 2 years ago

@AyaNasser96 OMZ demos are built on the assumption that OMZ models will be used for inference. As you said, you trained your own model and changed the configuration to a single detection class, which differs from the OMZ public yolo-v4 model. Note that the OMZ object_detection_demo provides a couple of optional command-line parameters for specifying non-default anchors and anchor masks for custom yolo-v4 models. I'm not sure whether that is enough to cover a different number of classes, but you can at least try. If that does not help, you will need to modify the demo to implement the output decoding for your custom model.

AyaNasser96 commented 2 years ago

@vladimir-dudnik Thank you, I'll investigate this further. I've also noticed something: I already have different weights (contained in the .bin file), so how can I provide those weights to the demo?

vladimir-dudnik commented 2 years ago

@AyaNasser96 The demo accepts the path to the model's .XML file (when you run it with a model in OpenVINO IR form; note that it is also possible to use a model in ONNX form, since OpenVINO can read ONNX models directly). The assumption is that the .BIN file has the same name as the .XML file and is located in the same folder, in which case OpenVINO finds the .BIN file automatically. The OpenVINO API does allow specifying both the .XML and the .BIN path in the read-network call (ReadNetwork in C++, read_network in Python); asking only for the .XML path is a demo simplification.
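
For instance, with the Inference Engine Python API shipped in this OpenVINO version, the weights path can be passed explicitly (a minimal sketch; the file names simply mirror the paths from this thread):

```python
# Minimal sketch using the OpenVINO 2021.4 Inference Engine Python API.
# The weights argument is optional: if omitted, the .bin file with the same
# base name next to the .xml file is picked up automatically.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(
    model='Plate_yolov4_1_3_416_416_static.xml',
    weights='Plate_yolov4_1_3_416_416_static.bin',
)
exec_net = ie.load_network(network=net, device_name='CPU')
```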

AyaNasser96 commented 2 years ago

@vladimir-dudnik That explains a lot, thank you! I'll investigate further and get back to you.

AyaNasser96 commented 2 years ago

@vladimir-dudnik

I tried the Python sample (object_detection_demo.py), first with -at yolo; then I found that there is also a yolov4 option. Both ways I get a post-processing problem.

I ran the model with the Python sample as follows:

python3 object_detection_demo.py -d CPU -m /home/aya/Deployment_project/Plate_IR/Plate_yolov4_1_3_416_416_static.xml -at yolo --labels /home/aya/Deployment_project/Plate_IR/Plate.txt -i /home/aya/Deployment_project/test_samples/1.jpg

and I got this error:

  File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/models/yolo.py", line 214, in postprocess
    out_blob.shape = layer_params[0]
ValueError: cannot reshape array of size 42588 into shape (1,8112,1,4)

and also this traceback:

File "object_detection_demo.py", line 331, in main results = detector_pipeline.get_result(next_frame_id_to_show) File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/pipelines/async_pipeline.py", line 132, in get_result return self.model.postprocess(raw_result, preprocess_meta), meta File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/models/yolo.py", line 215, in postprocess detections += self._parse_yolo_region(out_blob, meta['resized_shape'], layer_params[1], self.threshold) File "/opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/common/python/models/yolo.py", line 272, in _parse_yolo_region for row, col, n in np.ndindex(params.sides[0], params.sides[1], params.num): IndexError: list index out of range

This is related to the post-processing, since I have only 1 class as output. Do you know how to edit the post-processing in the C++ sample?

vladimir-dudnik commented 2 years ago

@fzhar @ivikhrev @akorobeinikov @anzhella-pankratova could you please help answer this?

@AyaNasser96 meanwhile, you may review one of the previous discussions about a custom yolo-v4 model, #2598. You may also find it useful to review this topic on the OpenVINO forum (it discusses how to convert a custom yolo-v3 with 3 classes to IR).

AyaNasser96 commented 2 years ago

Sorry for the late reply.

I want to point out that, in addition to the error messages above, when I choose yolov4 it reads the number of classes as 3544, not 1!

I think this means it is not accessing the output layer correctly, but when I print the output layer names I find that they point to confs and boxes. So I really don't know where the problem is; how could it report 3544 classes?

AyaNasser96 commented 2 years ago

I also found that my yolov4 config contains the following: classes=1 num=9

This is different from the OpenVINO yolov4 Python code, which has num=3 hard-coded.

So I changed those parameters according to my config file, and the number of classes is now reported as 1178!

Do I need to check something else in the config file? I already used this config to convert the yolov4 to ONNX and then to IR; I thought it should preserve this information!

eaidova commented 2 years ago

> This is different from the OpenVINO yolov4 Python code, which has num=3 hard-coded.

It is not a mistake; we use a slightly different approach to estimating num. In the darknet config, num holds the sum over all outputs (3 anchors for each of the yolo-v4 outputs gives 9) and is then divided by the number of outputs in the post-processing code; in our code we use the already post-processed, per-output-layer num setting.
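
In other words (a small illustration using the values from this thread):

```python
# The darknet cfg stores the total anchor count over all YOLO heads,
# while the demo's yolo.py works with the per-output value.
num_in_darknet_cfg = 9    # num=9 in the cfg: 3 anchors for each of 3 heads
num_yolo_outputs = 3      # yolo-v4 has three detection heads
num_per_output = num_in_darknet_cfg // num_yolo_outputs
print(num_per_output)     # 3 -> the value that looks "hard-coded" in the demo
```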

Unfortunately, since there is no native darknet support for conversion to IR, different conversion paths may lead to different results. I mean that the model output can be represented in different formats depending on which implementation you use to convert yolo models to other frameworks (Caffe, PyTorch, TF, ONNX, etc.). Our demo code is focused on the darknet-tf-openvino path, while you mention that you converted your model to ONNX. That is probably the root cause of your problems with integrating the model into the demos. Our OMZ yolo-v4 support is based on https://github.com/david8862/keras-YOLOv3-model-set: we convert yolo-v4 weights to a TF saved model using the darknet config and a script from that repo. You can find the parameters used for conversion in our pre-convert script (you may need to modify the paths to your config and weights), https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/yolo-v4-tf/pre-convert.py, and after that convert to IR. Could you please try this way? I think it should help align your model with our demo; a rough sketch of the conversion step follows below.
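
Something along these lines (a sketch only; the paths are illustrative and the exact converter arguments should be double-checked against pre-convert.py):

```python
# Sketch of the darknet -> TF saved model step, mirroring what
# models/public/yolo-v4-tf/pre-convert.py does. Paths are placeholders for
# your own config/weights; verify the flags against the pre-convert script.
import subprocess
import sys

subprocess.run([
    sys.executable,
    'keras-YOLOv3-model-set/tools/model_converter/convert.py',
    '--yolo4_reorder',                    # output reordering used for yolo-v4 weights
    'Plate/yolov4-custom.cfg',            # your darknet config (classes=1)
    'Plate/yolov4-custom_final.weights',  # your trained darknet weights
    'Plate_saved_model',                  # output TF saved model directory
], check=True)
```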

AyaNasser96 commented 2 years ago

@eaidova Many thanks for the great explanation. I managed to convert the model and ran it successfully with both the Python and C++ samples.

(The Python sample sometimes gives me an overflow error, so I used the C++ samples.)

But this model is meant to detect license plates. I had tested it with images before, and when I use the same images now with the C++ demo, the program runs successfully but no bounding box is drawn on the image!

Where could the problem be? Could it just be an OpenCV problem, or is the model unable to find the correct object bounding box?

eaidova commented 2 years ago

How did you convert the model to IR? Our OMZ demos assume that the model expects a BGR image in the [0, 255] range. That means some preprocessing options should be included in the MO command line (if I remember correctly, the standard for yolo models is an RGB image in [0, 1]): --scale 255 --reverse_input_channels
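
Applied to a Model Optimizer run, that looks roughly like this (a sketch assuming the openvino_2021.4.752 install layout seen in this thread and a TF saved model input; paths are illustrative):

```python
# Sketch of the IR conversion with the preprocessing options mentioned above.
import subprocess
import sys

mo_script = '/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo.py'
subprocess.run([
    sys.executable, mo_script,
    '--saved_model_dir', 'Plate_saved_model',  # TF saved model from the previous step
    '--scale', '255',                          # demo feeds pixels in [0, 255], model expects [0, 1]
    '--reverse_input_channels',                # demo feeds BGR, model expects RGB
    '--output_dir', 'Plate_IR',
], check=True)
```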

AyaNasser96 commented 2 years ago

@eaidova Thank you, I forgot that scaling step; it worked perfectly. I will post a detailed solution here and then close the issue.

AyaNasser96 commented 2 years ago

The conclusion is:

For example: convert the darknet model with the keras-YOLOv3-model-set script to a TF saved model, then run Model Optimizer with --scale 255 --reverse_input_channels as described above (my exact conversion command was in the attached screenshot).

Run the C++ demo with the converted model:

cd /opt/intel/openvino_2021.4.752/deployment_tools/open_model_zoo/demos/build/intel64/Debug

./object_detection_demo -d CPU -i /home/aya/Deployment_project/test_samples/1.jpg -m /home/aya/Deployment_project/openvino-assest/plate_IR/yolov4.xml -at yolo -labels /home/aya/Deployment_project/Plate_IR/Plate.txt -o out2.jpg