An Optimized Speech-to-Text Pipeline for the Whisper Model Supporting Multiple Inference Engines
MIT License
Replace `multiprocessing.dummy.Pool()` with `concurrent.futures.ThreadPoolExecutor()` so whisper_s2t instance can run separately with `multiprocessing.Process()` #74
I tried loading the whisper model and running transcription inside a separate `multiprocessing.Process()`, but it failed: the child process froze, unresponsive, while loading audio into memory with `multiprocessing.dummy.Pool()`. Replacing it with the safer `concurrent.futures.ThreadPoolExecutor()` fixed the hang.
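A minimal sketch of the swap this issue proposes. Both `multiprocessing.dummy.Pool` and `ThreadPoolExecutor` run the work in threads, but the executor avoids the hang observed when the pool is created inside a child `multiprocessing.Process()`. The `load_audio` function and the file names here are placeholders, not WhisperS2T's actual loader:

```python
from concurrent.futures import ThreadPoolExecutor

def load_audio(path):
    # Hypothetical stand-in for WhisperS2T's audio loader,
    # which would decode the file into a sample array.
    return f"samples:{path}"

paths = ["a.wav", "b.wav"]

# Before (froze inside a child multiprocessing.Process()):
# from multiprocessing.dummy import Pool
# with Pool() as pool:
#     audio = pool.map(load_audio, paths)

# After: same thread-based map, safe inside a child process.
with ThreadPoolExecutor() as ex:
    audio = list(ex.map(load_audio, paths))
```

`ThreadPoolExecutor.map` preserves input order just like `Pool.map`, so the change is a drop-in replacement for this usage.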