SYSTRAN / faster-whisper

Faster Whisper transcription with CTranslate2
MIT License

OpenVINO support request #612

Open albertomercurio opened 9 months ago

albertomercurio commented 9 months ago

I already use faster-whisper for Home Assistant speech recognition. I was wondering whether OpenVINO support could be implemented, since it would be a great performance gain for all systems without a dedicated GPU (which is the case for almost all home automation setups).

For example, I can see a huge performance difference in the Frigate project when object recognition is performed. With the multithreaded CPU model, CPU usage is very high (like faster-whisper with relatively large models), but the OpenVINO model is much faster and keeps CPU usage very low.

I was wondering if this implementation would be easy to do.

Furthermore, I found that an OpenVINO version already seems to exist, but it is derived from whisper rather than faster-whisper.

ILG2021 commented 9 months ago

I have tried OpenVINO in whisper.cpp and it is not faster than faster-whisper; it may be about the same speed.

albertomercurio commented 9 months ago

Ok, I thought it was faster, like in Frigate. Thanks for the trial.

0x3333 commented 8 months ago

I'm not into ML and NN, but would faster-whisper on OpenVINO be faster than whisper.cpp on OpenVINO, given that faster-whisper is faster than whisper.cpp? Or are these incompatible?

hlevring commented 7 months ago

I added a similar request over here, https://github.com/OpenNMT/CTranslate2/issues/1603, since I figured that might be the right place.