scottleibrand opened this issue 1 year ago
Maybe not, in light of https://github.com/danielgross/teleprompter/pull/1 ?
@scottleibrand Yeah, I believe the whisper.cpp implementation handles real-time transcription well with the smaller models. I ran it on CPU only and got pretty good performance, so I believe this is very doable without GPU support.
To use Whisper for an app like this, I think we’d first want M1 GPU support added, since running even the tiny model on CPU is barely above 1x transcription speed.
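To make the "1x speed" point concrete: a model only keeps up with live audio if it transcribes faster than the audio plays. Here's a minimal sketch of that calculation; the timing numbers are hypothetical, not measurements from this thread.

```python
def transcription_speed(audio_seconds: float, processing_seconds: float) -> float:
    """Speed multiple relative to real time: 1.0 means transcription exactly
    keeps pace with live audio; below 1.0 it falls behind and can't run live."""
    return audio_seconds / processing_seconds

# Hypothetical CPU timing: 60 s of audio transcribed in 55 s of wall-clock time
# gives ~1.09x -- "barely above 1x", leaving little headroom for a live
# teleprompter that also has to render text as it arrives.
speed = transcription_speed(60.0, 55.0)
print(f"{speed:.2f}x")
```

For a live app you'd realistically want comfortable margin above 1x (to absorb bursts and other load on the machine), which is why GPU support matters even for the tiny model.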
https://github.com/openai/whisper/pull/382 isn’t yet merged, and it’s unclear to me what is required to get it working.
Thoughts?