Have you compiled the models and included them in that directory? They are not included in Frigate.
Thanks for the prompt reply. Yes, I have converted both the yolox_tiny and yolov8 models to FP16-precision IR models and placed the .xml and labelmap files in the /media/frigate directory.
cc @aeozyalcin
How are you obtaining/converting your yolox_tiny/yolov8 models? It looks like OpenVINO doesn't like them. It's odd that it says "Converting input model", which suggests to me that the problem lies with the models themselves.
Actually, for yolov8n, I used the Colab script you created and only changed the Python version in the "Convert ONNX model to OpenVino" section from 3.8 to 3.9 so it would run, and that gave me the converted model.
For yolox_tiny, I installed the OpenVINO Development Tools in another Ubuntu VM and ran the following command
mo --input_model yolox_tiny.onnx --compress_to_fp16 --input_shape [1,3,416,416]
to obtain the converted model.
Both yolov8n and yolox_tiny resulted in the same error in the Frigate log: no camera stream shows up and detection does not work. Maybe I did the model conversion incorrectly?
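One thing I still need to try is loading the IR outside of Frigate with a quick one-liner like the one below (the filename/path are just from my setup), to see whether the problem is the .xml/.bin files themselves or how Frigate reads them:
# quick sanity check that OpenVINO can read the IR at all, outside of Frigate
# (adjust the path to wherever your converted .xml lives)
python3 -c "from openvino.runtime import Core; Core().read_model('/media/frigate/yolox_tiny.xml'); print('model loaded')"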
I changed the Colab notebook. Can you try generating a yolov8 model again, and try that new model with Frigate?
It is finally working now, thanks for updating the Colab notebook. I did a few things yesterday:
1. Regenerated the yolov8n model using your updated Colab notebook.
2. Updated Proxmox from 6.2 to 7.4, which also updated the Proxmox kernel from 5.4 to 5.15. After this update, lspci shows the detailed iGPU name (UHD 610) instead of a generic Intel VGA device.
3. Updated the kernel of the Ubuntu VM from 5.4 to 5.15 as well.
I am having the same issue trying to set up yolox_tiny.
Model conversion to FP16 has been successful, using
omz_converter --name yolox-tiny
The .xml and label files have been copied to an accessible folder.
========== Converting yolox-tiny to ONNX
Conversion to ONNX command: /usr/bin/python3 -- /usr/local/lib/python3.9/dist-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/usr/local/lib/python3.9/dist-packages/openvino/model_zoo/models/public/yolox-tiny --model-path=/root/public/yolox-tiny --model-name=create_model --import-module=model '--model-param=weights=r"/root/public/yolox-tiny/yolox_tiny.pth"' --input-shape=1,3,416,416 --input-names=images --output-names=output --output-file=/root/public/yolox-tiny/yolox-tiny.onnx
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
ONNX check passed successfully.
========== Converting yolox-tiny to IR (FP16)
Conversion command: /usr/bin/python3 -- /usr/local/bin/mo --framework=onnx --data_type=FP16 --output_dir=/root/public/yolox-tiny/FP16 --model_name=yolox-tiny --input=images '--mean_values=images[123.675,116.28,103.53]' '--scale_values=images[58.395,57.12,57.375]' --reverse_input_channels --output=output --input_model=/root/public/yolox-tiny/yolox-tiny.onnx '--layout=images(NCHW)' '--input_shape=[1, 3, 416, 416]'
[ WARNING ] Use of deprecated cli option --data_type detected. Option use in the following releases will be fatal.
Check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg_all&medium=organic or on https://github.com/openvinotoolkit/openvino
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /root/public/yolox-tiny/FP16/yolox-tiny.xml
[ SUCCESS ] BIN file: /root/public/yolox-tiny/FP16/yolox-tiny.bin
The errors in my log are exactly the same as the OP's.
EDIT: I just created a yolov8 model using @aeozyalcin's Colab and got the same error:
2023-05-11 08:27:08.160692654 [2023-05-11 10:27:08] frigate.app INFO : Starting Frigate (0.12.0-bc16ad1)
2023-05-11 08:27:08.192834786 [2023-05-11 10:27:08] frigate.config WARNING : Customizing more than a detector model path is unsupported.
2023-05-11 08:27:08.198703630 [2023-05-11 10:27:08] frigate.app INFO : Creating directory: /tmp/cache
2023-05-11 08:27:08.200963801 [2023-05-11 10:27:08] peewee_migrate INFO : Starting migrations
2023-05-11 08:27:08.205537574 [2023-05-11 10:27:08] peewee_migrate INFO : There is nothing to migrate
2023-05-11 08:27:08.224000695 [2023-05-11 10:27:08] frigate.app INFO : Output process started: 300
2023-05-11 08:27:08.231060140 [2023-05-11 10:27:08] detector.ov INFO : Starting detection process: 299
2023-05-11 08:27:08.241083810 [2023-05-11 10:27:08] frigate.app INFO : Camera processor started for Parking_cam: 304
2023-05-11 08:27:08.244106245 Process detector:ov:
2023-05-11 08:27:08.245734199 Traceback (most recent call last):
2023-05-11 08:27:08.246335791 File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-05-11 08:27:08.246338433 self.run()
2023-05-11 08:27:08.246704498 [2023-05-11 10:27:08] frigate.app INFO : Camera processor started for Shed_cam: 305
2023-05-11 08:27:08.246854245 File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-05-11 08:27:08.246856581 self._target(*self._args, **self._kwargs)
2023-05-11 08:27:08.246980952 File "/opt/frigate/frigate/object_detection.py", line 98, in run_detector
2023-05-11 08:27:08.246983287 object_detector = LocalObjectDetector(detector_config=detector_config)
2023-05-11 08:27:08.247105717 File "/opt/frigate/frigate/object_detection.py", line 52, in __init__
2023-05-11 08:27:08.247107884 self.detect_api = create_detector(detector_config)
2023-05-11 08:27:08.247214681 File "/opt/frigate/frigate/detectors/__init__.py", line 24, in create_detector
2023-05-11 08:27:08.247216679 return api(detector_config)
2023-05-11 08:27:08.247326151 File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 26, in __init__
2023-05-11 08:27:08.247328352 self.ov_model = self.ov_core.read_model(detector_config.model.path)
2023-05-11 08:27:08.247449968 RuntimeError: Check 'false' failed at ../src/frontends/common/src/frontend.cpp:53:
2023-05-11 08:27:08.247451909 Converting input model
EDIT EDIT: It was the stupidest thing ever. I copied the xml and the model description, but was missing the other .mapping and .bin files. With all the files there, I have been able to use yolox-tiny and yolov8 without any issue
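For anyone else hitting this, roughly what I ended up copying is below (the destination is just an example, use whatever folder your Frigate config points at, and take the labelmap from wherever you keep it):
# copy the full IR (xml + bin + mapping) plus the labelmap into a folder Frigate can see
mkdir -p /media/frigate/models/yolox-tiny/FP16
cp /root/public/yolox-tiny/FP16/yolox-tiny.{xml,bin,mapping} /media/frigate/models/yolox-tiny/FP16/
cp /path/to/coco_80cl.txt /media/frigate/models/yolox-tiny/FP16/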
When I run the Colab notebook, it fails with the following error:
ImportError Traceback (most recent call last)
10 frames
/usr/local/lib/python3.10/dist-packages/pandas/core/indexing.py in
ImportError: cannot import name 'is_exact_shape_match' from 'pandas.core.indexers' (/usr/local/lib/python3.10/dist-packages/pandas/core/indexers/__init__.py)
I am on the latest macOS and Chrome.
Any idea what I am doing wrong? Many thanks,
Philippe
I just ran the Colab notebook, and it's working fine. It doesn't matter what OS/browser you are running on your PC; the notebook runs on Google's servers.
Thanks aeozyalcin for checking on your side. After a couple of restarts/resets I managed to get it to work.
However, I had a lot of dependency warnings/errors during the OpenVINO conversion. The three files (xml, mapping, and bin) were generated in the end. Should I ignore the warnings/errors?
Many thanks for your help - Philippe
I'd recommend trying the generated model in Frigate and see if it works. It should work fine, as long as you make sure all 3 files associated with the model are accessible by Frigate.
Thanks aeozyalcin. This is what I have done. I left my holiday house, where the camera setup sits, last night. I will resume testing when I get back there, about two weeks from now, and report back. Philippe
Does anyone know why this export command doesn't work?
yolo export model=yolov8m.pt format=openvino
I get the same error log as OP:
frigate | 2023-06-15 02:41:29.667373515 [2023-06-15 02:41:29] frigate.app INFO : Capture process started for test6: 628
frigate | 2023-06-15 02:41:29.670187573 [2023-06-15 02:41:29] frigate.app INFO : Capture process started for test7: 634
frigate | 2023-06-15 02:41:29.674017359 Process detector:ov:
frigate | 2023-06-15 02:41:29.674172035 [2023-06-15 02:41:29] frigate.app INFO : Capture process started for test8: 639
frigate | 2023-06-15 02:41:29.674725564 Traceback (most recent call last):
frigate | 2023-06-15 02:41:29.674757260 File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
frigate | 2023-06-15 02:41:29.674758388 self.run()
frigate | 2023-06-15 02:41:29.674759339 File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
frigate | 2023-06-15 02:41:29.674763767 self._target(*self._args, **self._kwargs)
frigate | 2023-06-15 02:41:29.674765889 File "/opt/frigate/frigate/object_detection.py", line 98, in run_detector
frigate | 2023-06-15 02:41:29.674779625 object_detector = LocalObjectDetector(detector_config=detector_config)
frigate | 2023-06-15 02:41:29.674780720 File "/opt/frigate/frigate/object_detection.py", line 52, in __init__
frigate | 2023-06-15 02:41:29.674782241 self.detect_api = create_detector(detector_config)
frigate | 2023-06-15 02:41:29.674783221 File "/opt/frigate/frigate/detectors/__init__.py", line 24, in create_detector
frigate | 2023-06-15 02:41:29.674784198 return api(detector_config)
frigate | 2023-06-15 02:41:29.674785182 File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 26, in __init__
frigate | 2023-06-15 02:41:29.674801690 self.ov_model = self.ov_core.read_model(detector_config.model.path)
frigate | 2023-06-15 02:41:29.674802493 RuntimeError: Check 'false' failed at ../src/frontends/common/src/frontend.cpp:53:
frigate | 2023-06-15 02:41:29.674810813 Converting input model
frigate | 2023-06-15 02:41:29.674811535
frigate | 2023-06-15 02:41:29.677900486 [2023-06-15 02:41:29] frigate.app INFO : Capture process started for test9: 644
frigate | 2023-06-15 02:41:29.680531753 [2023-06-15 02:41:29] frigate.app INFO : Capture process started for test10: 649
frigate | 2023-06-15 02:41:39.082883580 [INFO] Starting go2rtc healthcheck service...
But the Colab notebook provided by aeozyalcin works.
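In case it matters, the manual two-step route I was thinking of trying next is below; the flags are a guess on my part, and I don't know whether this matches what the Colab notebook actually does:
# guess at a manual path: export to ONNX first, then run OpenVINO's model optimizer on it
yolo export model=yolov8m.pt format=onnx
mo --input_model yolov8m.onnx --compress_to_fp16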
Could you please help me with this @aeozyalcin?
@yannpub, where did you get the mapping and label files? I followed the same instructions (i.e., using omz_downloader and omz_converter), but only yolox-tiny.xml and yolox-tiny.bin were created. I also tried the Colab notebook code; I have no idea if it is correct, but the yolox-tiny model does not exist there. So all my attempts end in a "Converting input model" error. Another question: did you use omz_downloader and omz_converter inside the Frigate Docker container?
@Duncan1224, I'm also stuck here (i.e., encountering the "Converting input model" error message).
Are you saying that using omz_converter to generate yolox models is only generating bin and xml files? There should be 3 files created.
What version of openvino do you have installed?
@aeozyalcin, thank you for the quick response. Unfortunately, after omz_converter completes, the yolox-tiny/FP16 folder contains only XML and BIN files. I'm currently using version 2023.0.0 of openvino-dev, installed with: pip install openvino-dev
OK, I suspect this must be a change in the newest version of OpenVINO. Can you try the 2022.3 version of OpenVINO and report back on whether all 3 files are created?
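Something along these lines should do it (this is from memory, so double-check the exact flags):
# pin the older openvino-dev release, then re-run the downloader/converter
pip install openvino-dev==2022.3.1
omz_downloader --name yolox-tiny
omz_converter --name yolox-tiny --precisions FP16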
@aeozyalcin, you're right! The problem is the version of openvino-dev. I installed the version before the latest one (2022.3.1), and the three files were created in both directories (FP16 and FP32). After pointing Frigate at it, it works like a charm! Thank you!
Log file:
2023-06-22 19:18:36.162158769 [2023-06-22 19:18:36] frigate.detectors.plugins.openvino INFO : Model Input Shape: {1, 3, 416, 416}
2023-06-22 19:18:36.162284517 [2023-06-22 19:18:36] frigate.detectors.plugins.openvino INFO : Model Output-0 Shape: {1, 3549, 85}
2023-06-22 19:18:36.162381106 [2023-06-22 19:18:36] frigate.detectors.plugins.openvino INFO : Model has 1 Output Tensors
2023-06-22 19:18:36.162468181 [2023-06-22 19:18:36] frigate.detectors.plugins.openvino INFO : YOLOX model has 80 classes
Config file:
detectors:
  ov:
    type: openvino
    device: GPU

model:
  path: /media/frigate/models/yolox-tiny/FP16/yolox-tiny.xml
  width: 416
  height: 416
  input_tensor: nchw
  input_pixel_format: bgr
  model_type: yolox
  labelmap_path: /media/frigate/models/yolox-tiny/FP16/coco_80cl.txt
Inference speed is ~25 ms on a UHD Graphics 770 / 12th Gen Intel(R) Core(TM) i5-12500T.
Same problem and solution for me: pip install openvino-dev==2022.3.1 works. The OpenVINO documentation states that the .mapping files are optional and not required (https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/File-names-in-openvino/td-p/1372186), and opening the file in a text editor confirms that it seems to contain only redundant information. Is there a way to get Frigate to not require the .mapping file?
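One way to check whether OpenVINO really needs it would be to move the .mapping file aside and try the same read_model call that shows up in the tracebacks above, something like:
# temporarily move the .mapping aside, then try the same read_model call Frigate's openvino plugin uses
mv yolox-tiny.mapping yolox-tiny.mapping.bak
python3 -c "from openvino.runtime import Core; Core().read_model('yolox-tiny.xml'); print('loaded without .mapping')"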
Hi aeozyalcin. Just to confirm that your YOLOv8-for-Frigate Colab notebook works fine, despite the dependency warnings. Thanks for the work. Quite impressed with yolov8. I am in an environment with a lot of moving shadows from trees, and I see far fewer false positives than with yolox. I will keep testing this yolov8 option.
Silly question: where did you get "the other .mapping and .bin files" mentioned above? I see the .bin file from the Colab, but I don't see a .mapping or any other files.
Describe the problem you are having
I am currently running the OpenVINO detector with the default SSD model, which runs fine with hardware acceleration enabled. But when I try switching to yolox_tiny or yolov8, Frigate is able to start but throws an error in the detector process, and the camera stream shows nothing in the web UI.
Frigate is running in an Ubuntu VM within Proxmox 6.2.
Version
0.12.0-DA3E197
Frigate config file
docker-compose file or Docker CLI command
Relevant log output
Operating system
Proxmox
Install method
Docker Compose
Coral version
Other
Any other information that may be helpful
No response