Open albertomercurio opened 9 months ago
I have tried OpenVINO in whisper.cpp and it is not faster than faster-whisper; it may be about the same speed.
Ok, I thought it was faster, like in Frigate. Thanks for trying it out.
I’m not into ML and NN, but would faster-whisper on OpenVINO be faster than whisper.cpp on OpenVINO, given that faster-whisper is faster than whisper.cpp? Or are these incompatible?
I added a similar request over here https://github.com/OpenNMT/CTranslate2/issues/1603 since I figured that might be the right place.
I already use it for Home Assistant speech recognition. I was wondering whether OpenVINO support could be implemented, since it would be a great performance feature for all those systems without a dedicated GPU (which is almost all home-automation setups).
For example, I can see a huge performance difference in the Frigate project when object recognition is performed. With the multithreaded CPU model I get very high CPU usage (like faster-whisper with relatively large models), but the OpenVINO model is much faster while keeping CPU usage very low.
I was wondering if this implementation would be easy to do.
Furthermore, I found that an OpenVINO version already seems to exist, but it is derived from whisper, not faster-whisper.