Const-me / Whisper

High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
Mozilla Public License 2.0

add multiple gpus support #189

Open alighamdan opened 7 months ago

alighamdan commented 7 months ago

Is it possible to add multi-GPU support to make it faster? For example, running on an NVIDIA GPU together with the Intel integrated GPU, or something like that?

RickArcher108 commented 7 months ago

Don't know about that, but if you get something like this with an NVIDIA GeForce RTX 4090, you should notice a huge speed boost. I did: https://buildredux.com/pages/build-your-pc

emcodem commented 7 months ago

For a single source file it will not be possible to split the work across multiple GPUs in parallel, because the transcription of each 30-second chunk depends on the results of the previous one (the decoder is conditioned on the preceding text).

It does not look like this project is being actively developed anymore, so if you really need more speed you will have to arrange it yourself. Besides getting better hardware as @RickArcher108 suggests, you can also split the input audio into multiple parts yourself and either process them on several workstations in parallel, or use the -gpu switch of the command-line version to pick which GPU each instance runs on (rough sketch below).
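
To make that concrete, here is a minimal sketch of the split-and-distribute idea. It is not part of this repo: it assumes ffmpeg is on PATH, that the command-line example is main.exe, and that it takes whisper.cpp-style flags (-m for the model, -f for the input file, -otxt for text output, plus the -gpu switch mentioned above). The adapter names, segment length, and output-file naming are placeholders; adjust them to your build.

```python
# Sketch: split a long recording into parts, transcribe the parts on two GPUs
# in parallel by running one main.exe process per part, then join the outputs.
import subprocess
from pathlib import Path

SOURCE = Path("input.wav")
MODEL = Path("ggml-medium.bin")
# Placeholder adapter names as reported by the -gpu switch; replace with yours.
GPUS = ["NVIDIA GeForce RTX 4090", "Intel(R) UHD Graphics 770"]
SEGMENT_SECONDS = 600  # 10-minute parts; longer parts lose less context at the seams

# 1. Split the source into fixed-length parts without re-encoding.
subprocess.run([
    "ffmpeg", "-i", str(SOURCE), "-f", "segment",
    "-segment_time", str(SEGMENT_SECONDS), "-c", "copy", "part_%03d.wav",
], check=True)

# 2. Distribute the parts across GPUs round-robin, one process per part.
parts = sorted(Path(".").glob("part_*.wav"))
procs = []
for i, part in enumerate(parts):
    gpu = GPUS[i % len(GPUS)]
    procs.append(subprocess.Popen([
        "main.exe", "-m", str(MODEL), "-gpu", gpu,
        "-otxt", "-f", str(part),
    ]))

# 3. Wait for all processes, then concatenate the per-part transcripts in order
#    (assumes -otxt writes "<input>.txt" next to each part, whisper.cpp style).
for p in procs:
    p.wait()
with open("transcript.txt", "w", encoding="utf-8") as out:
    for part in parts:
        out.write(Path(str(part) + ".txt").read_text(encoding="utf-8"))
```

Note the trade-off: splitting loses the rolling context at each cut, so words right at a boundary may be transcribed slightly worse than in a single sequential pass.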