tobiashuttinger / openai-whisper-realtime

A quick experiment to achieve almost realtime transcription using Whisper.
MIT License

Is this real-time for `medium` and `large` models? #1

Closed. jayavanth closed this issue 2 years ago.

jayavanth commented 2 years ago

I'm still trying to get Whisper running on an M1 Mac, so I can't test this right now. Is it still real-time with the medium and large models?

tobiashuttinger commented 2 years ago

That depends entirely on how fast your hardware can do the inference. On an M1 (Max), the base model might run well; everything above that is still too computationally expensive.
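
A quick way to judge this for your own machine is to compare inference time against the length of the audio being transcribed. The sketch below is not from this repo; it assumes the standard openai-whisper API and a local test clip named `audio.wav` (any short recording works). A real-time factor below 1.0 means the model keeps up with the incoming audio.

```python
import time

import whisper

AUDIO_PATH = "audio.wav"  # assumption: any short local recording

model = whisper.load_model("base")  # try "medium" or "large" to compare

audio = whisper.load_audio(AUDIO_PATH)
duration = len(audio) / whisper.audio.SAMPLE_RATE  # clip length in seconds

start = time.perf_counter()
model.transcribe(audio)
elapsed = time.perf_counter() - start

# RTF < 1.0: inference is faster than the audio it consumes (real-time capable)
print(f"audio: {duration:.1f}s, inference: {elapsed:.1f}s, RTF: {elapsed / duration:.2f}")
```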

jayavanth commented 2 years ago

Thanks! I got it running on an M1 Max, and it looks like it's not fast enough. It's transcribing, but with a delay. Also, I don't think it's using the GPU, because GPU usage sits at 0%. Still, it's pretty impressive that it works at all on just the CPU.
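
To confirm what hardware PyTorch actually sees (and which device the model ends up on), a minimal check like the one below can help. This is a sketch, not part of the repo; it assumes Whisper's MPS (Apple GPU) support may be incomplete, so a fallback to CPU is expected on some versions.

```python
import torch
import whisper

# Which accelerators does this PyTorch build see?
print("CUDA available:", torch.cuda.is_available())
print("MPS (Apple GPU) available:", torch.backends.mps.is_available())

# Assumption: explicitly request the Apple GPU via the MPS backend if present.
device = "mps" if torch.backends.mps.is_available() else "cpu"
try:
    model = whisper.load_model("base", device=device)
except Exception as exc:  # some ops may not be implemented on MPS
    print(f"Falling back to CPU: {exc}")
    model = whisper.load_model("base", device="cpu")

print("Model is running on:", next(model.parameters()).device)
```

If the model ends up on `cpu` despite MPS being available, that matches the 0% GPU usage you observed.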