collabora / WhisperLive

A nearly-live implementation of OpenAI's Whisper.
MIT License

Question about parallelism using whisper-live vs. faster-whisper on a single GPU #250

Open DinnoKoluh opened 5 days ago

DinnoKoluh commented 5 days ago

Firstly, thank you for a great repository.

I have a question regarding parallelism using whisper-live vs. faster-whisper on a single GPU. In this faster-whisper issue, the user asked whether near-linear scaling could be achieved on a single GPU, and the answer was negative. I tried it myself and couldn't achieve linear scaling (e.g. if transcribing a single file takes 15 seconds, running two files in two threads takes about 30 seconds to complete).

In this issue here on whisper-live you claim that 4 parallel streams are possible on a single GPU without much degradation. I tested it with three streams and it really doesn't show much degradation.

From what I understood of the faster-whisper implementation, when transcribing a file, the model takes up all the GPU resources for the current chunk being transcribed. So when running it in multiple threads, the threads essentially compete with each other, each waiting for the others to finish transcribing the chunk from their respective audio file. I would guess the same thing happens here, yet whisper-live scales to multiple streams without that slowdown. I went through the code, but I couldn't see anything there that I hadn't already tried myself when using threading with plain faster-whisper, without much success.
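The contention I'm describing can be illustrated with a small, self-contained sketch (no actual Whisper model involved, all names are hypothetical): a lock stands in for exclusive GPU use during a chunk, so two threads that each "transcribe" for t seconds finish in roughly 2t of wall time, while a workload that truly overlaps on the device finishes in about t.

```python
import threading
import time

gpu_lock = threading.Lock()   # stands in for exclusive GPU use per chunk
counter_lock = threading.Lock()
concurrent = 0                # how many "transcriptions" run right now
max_concurrent = 0            # peak observed concurrency

def fake_transcribe(serialize_on_gpu: bool, work_seconds: float = 0.05) -> None:
    """Pretend to transcribe one chunk; optionally serialized on the 'GPU'."""
    global concurrent, max_concurrent
    if serialize_on_gpu:
        gpu_lock.acquire()
    with counter_lock:
        concurrent += 1
        max_concurrent = max(max_concurrent, concurrent)
    time.sleep(work_seconds)  # the compute-bound part of the chunk
    with counter_lock:
        concurrent -= 1
    if serialize_on_gpu:
        gpu_lock.release()

def run(n_threads: int, serialize: bool) -> tuple[float, int]:
    """Run n_threads 'transcriptions'; return (wall time, peak concurrency)."""
    global concurrent, max_concurrent
    concurrent = max_concurrent = 0
    threads = [threading.Thread(target=fake_transcribe, args=(serialize,))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start, max_concurrent

serialized_time, serialized_peak = run(2, serialize=True)
parallel_time, parallel_peak = run(2, serialize=False)
print(f"serialized: {serialized_time:.3f}s (peak concurrency {serialized_peak})")
print(f"parallel:   {parallel_time:.3f}s (peak concurrency {parallel_peak})")
```

With the lock held, peak concurrency stays at 1 and the two threads take roughly twice as long in total, which matches the 15 s vs. 30 s numbers above; without it they overlap. The open question is which mechanism in whisper-live avoids this serialization.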

So, my question is, how was it possible to achieve this performance?

Bodawen commented 4 days ago

Same question here.