santiago-afonso closed this issue 6 months ago
Yes, this project is really fantastic, and I've spent quite a bit of time getting it working via CUDA. It does exactly what I want, but latency is a challenge. Even on a very new laptop with a mobile RTX 4090, the wait for transcription makes it borderline worthwhile if you're a reasonably fast typist.
Hi there,
Thanks for your comments! I haven't had the chance to look much into the Windows port linked above, but I did recently return to this project and switched from the original OpenAI Whisper model to faster-whisper. Local transcription should be a lot faster now! :)
Your UX of being able to Whisper-dictate anywhere is awesome. I'd love to see it combined with the engine from https://github.com/Const-me/Whisper, which runs Whisper locally on your own GPU (even low- to mid-range ones like a 1050) and doesn't require users to install Python, which is a big hurdle.
And congrats on the amazing project!