jayavanth closed this issue 2 years ago
That entirely depends on how fast your hardware can do the inference. On an M1 (Max) the base model might run well; anything larger is still too computationally expensive.
Thanks! I got it running on an M1 Max, and it looks like it's not fast enough. It's predicting, but with a delay. Also, I don't think it's using the GPU, because GPU usage sits at 0%. Still, it's pretty impressive that I can see it working on CPU alone.
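For anyone hitting the same 0% GPU usage: a likely cause is that PyTorch defaults to CPU, so the model never lands on the Apple GPU. A minimal sketch to check whether the MPS backend is even available (this assumes the PyTorch-based `openai/whisper` package; whether Whisper actually runs correctly on MPS depends on your PyTorch version, so treat the commented lines as a hypothetical usage, not a guaranteed fix):

```python
import torch

# On Apple Silicon, PyTorch exposes the GPU via the MPS backend.
# If this prints "cpu", the GPU can't be used at all in this environment.
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Selected device: {device}")

# Hypothetical usage with openai/whisper (uncomment to try):
# import whisper
# model = whisper.load_model("base", device=device)
# result = model.transcribe("audio.wav")
# print(result["text"])
```

If `is_available()` returns False, you'd need a PyTorch build with MPS support (1.12+) running on macOS 12.3 or later.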
I'm still trying to get whisper running on an M1 Mac, so I can't test it right now. Is it still real-time with the medium and large models?