chidiwilliams / buzz

Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
https://chidiwilliams.github.io/buzz
MIT License
11.95k stars 899 forks

whisper.cpp with large-v3 model only uses CPU, does not use GPU #855

Closed Aridea2021 closed 1 month ago

Aridea2021 commented 1 month ago

Dear developer, many thanks for offering such useful software; I tried it after a friend shared it. It seems to work with only the CPU when I choose the whisper.cpp large-v3 model. I'd like to know how to use the GPU to accelerate transcription.

Best regards,

Dong

raivisdejus commented 1 month ago

If you have a GPU with enough VRAM, there is little speed difference between Whisper.cpp and Faster Whisper. So the easiest way to get faster, GPU-accelerated transcription is to use another Whisper model type.

Aridea2021 commented 1 month ago

> If you have a GPU with enough VRAM there is little speed difference among Whisper.cpp and Faster Whisper. So the easiest way to get faster transcription with GPU is to use some other Whisper type.

Hi raivisdejus, thank you for your kind help, but I didn't quite follow. I do have a modified 2080 Ti with 22 GB of VRAM, and it works well in WhisperDesktop.exe with the Whisper.cpp large-v3 model. Can you tell me which Whisper type I should choose so that I can use the GPU to speed up transcription? Thanks again~

[screenshot: PixPin_2024-07-24_01-14-36]

raivisdejus commented 1 month ago

Use Faster Whisper.

But you also need CUDA installed https://developer.nvidia.com/cuda-downloads
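Before switching the model type in Buzz, it can help to confirm the NVIDIA driver actually sees the GPU. The sketch below is not part of Buzz; it is a hypothetical stdlib-only check that looks for the `nvidia-smi` tool (shipped with the NVIDIA driver) and asks it to list GPUs:

```python
import shutil
import subprocess


def cuda_gpu_visible() -> bool:
    """Rough check: return True if the NVIDIA driver reports at least one GPU.

    This only confirms the driver is installed and a GPU is visible;
    the CUDA toolkit from https://developer.nvidia.com/cuda-downloads
    may still be needed for Faster Whisper to use it.
    """
    # nvidia-smi ships with the NVIDIA driver; if it is missing,
    # the driver (and therefore CUDA) is almost certainly not set up.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # "-L" lists GPUs, one per line, e.g. "GPU 0: NVIDIA GeForce RTX 2080 Ti ..."
        result = subprocess.run(
            ["nvidia-smi", "-L"], capture_output=True, text=True, timeout=10
        )
    except (subprocess.SubprocessError, OSError):
        return False
    return result.returncode == 0 and "GPU" in result.stdout


if __name__ == "__main__":
    print("CUDA-capable GPU visible:", cuda_gpu_visible())
```

If this prints `False`, install or repair the NVIDIA driver first; if it prints `True` but Faster Whisper still runs on the CPU, install the CUDA toolkit linked above.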

Aridea2021 commented 1 month ago

> Use Faster Whisper.
>
> But you also need CUDA installed https://developer.nvidia.com/cuda-downloads

So nice, thank you, raivisdejus!