-
I don't think there is any need to generate new OpenVINO IR files. It seems there are now updated models here:
https://huggingface.co/Intel/whisper.cpp-openvino-models/tree/main
https://huggingface.co/twdragon/whisper.cpp-openv…
-
Hi,
I have an Intel Core Ultra 9 Processor with NPU.
I was unable to start recognition on the NPU; everything works on the CPU.
This is my code:
string ocrPath = publishDir…
aropb updated 1 month ago
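For NPU questions like this one, it can help to check whether the encoder device is being selected explicitly. A minimal sketch, assuming a whisper.cpp build with OpenVINO enabled and placeholder model/audio paths — per whisper.cpp's README, the `-oved` / `--ov-e-device` option selects the OpenVINO device for the encoder:

```shell
# Sketch: try the OpenVINO encoder on the NPU, fall back to CPU if it fails.
# Assumes a whisper.cpp binary built with -DWHISPER_OPENVINO=1;
# model and sample paths below are placeholders.
./main -m models/ggml-base.en.bin -oved NPU -f samples/jfk.wav \
  || ./main -m models/ggml-base.en.bin -oved CPU -f samples/jfk.wav
```

If the NPU plugin rejects the model, the fallback run on CPU at least confirms the IR files themselves are valid.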
-
### Context
This task regards enabling tests for **dolly-v2-3b**. You can find more details under openvino_notebooks [LLM question answering README.md](https://github.com/openvinotoolkit/openvino_not…
-
What is the reason 0.7.0 and 0.7.1 are yanked on crates.io? I could not find any information on that. This forces me to update the openvino dependency, which is currently hard for me since it doesn't s…
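One workaround while a version is yanked: Cargo will not select yanked versions for new lockfiles, but an existing `Cargo.lock` that already references them keeps resolving. To stay on a specific non-yanked release, you can pin an exact version with the `=` operator. A minimal sketch — `0.6.0` here is a placeholder, not a recommendation:

```toml
# Cargo.toml sketch: pin the openvino crate to one exact version.
# "0.6.0" is an assumed placeholder; substitute whichever published,
# non-yanked version builds for your project.
[dependencies]
openvino = "=0.6.0"
```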
-
### Context
This task regards enabling tests for **red-pajama-3b-instruct**. You can find more details under openvino_notebooks [LLM question answering README.md](https://github.com/openvinotoolkit/o…
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### YOLOv8 Component
Export
### Bug
When us…
-
# ONNXRuntime inference works well on Raspberry Pi 4 with Intel NCS2: step by step setup with OpenVINO Execution Provider - PUT Vision Lab
[https://putvision.github.io/article/raspberry-onnxruntime…
-
Hello dear developer,
I can't use OpenVINO on the Jetson Xavier NX device. What tools can I use to load your model (.xml and .bin)?
Looking forward to your reply!
avdvg updated 6 months ago
-
### Before Asking
- [X] I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully.
- [ ] I want to train my custom dataset, and I have read the …
-
According to the manual, I just want to speed up inference on the CPU via OpenVINO, but I ran into the problem below.
(openvino_conv_env) [root@zaozhuang3L-C6-35 whisper.cpp]# ./main -m models/ggml-ba…
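For issues like this, the first thing to verify is that the binary was actually built with OpenVINO support. A minimal build sketch following whisper.cpp's OpenVINO notes — the setupvars.sh path is an assumption and should be adjusted to the local OpenVINO install:

```shell
# Sketch: build whisper.cpp with OpenVINO support enabled.
# The setupvars.sh location is an assumed default install path.
source /opt/intel/openvino/setupvars.sh
cmake -B build -DWHISPER_OPENVINO=1
cmake --build build -j --config Release
```

Without `-DWHISPER_OPENVINO=1` at configure time, `./main` silently runs the plain CPU path even if the OpenVINO IR files are present next to the model.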