I'm in the process of integrating multiple whisper backends into a unified package that includes VAD-based chunking. During testing, I observed significantly higher inference times while using the HuggingFace pipeline with distil-whisper. You can find the details here: https://github.com/shashikg/WhisperS2T/releases/tag/v1.1.0 [A30 GPU]
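For readers unfamiliar with the VAD-based chunking mentioned above, here is a minimal sketch of the segmentation step. This is an assumption for illustration only: WhisperS2T's actual VAD uses a trained model, whereas this toy version uses a simple frame-energy threshold, and the function name `vad_chunk` is hypothetical.

```python
def vad_chunk(samples, sample_rate=16000, frame_ms=30,
              energy_threshold=0.01, max_chunk_s=30.0):
    """Split audio into speech chunks no longer than max_chunk_s seconds.

    samples: list of floats in [-1, 1].
    Returns a list of (start_sample, end_sample) tuples.
    """
    frame_len = int(sample_rate * frame_ms / 1000)

    # Classify each frame as voiced/unvoiced by mean energy (toy VAD).
    voiced = []
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / max(len(frame), 1)
        voiced.append(energy > energy_threshold)

    # Merge contiguous voiced frames into chunks, splitting long runs so
    # each chunk fits the model's 30 s context window.
    chunks = []
    start = None
    for idx, v in enumerate(voiced):
        if v and start is None:
            start = idx * frame_len
        elif not v and start is not None:
            chunks.append((start, idx * frame_len))
            start = None
        if start is not None and (idx + 1) * frame_len - start >= max_chunk_s * sample_rate:
            chunks.append((start, (idx + 1) * frame_len))
            start = None
    if start is not None:
        chunks.append((start, len(samples)))
    return chunks
```

Each resulting chunk would then be transcribed independently, which is where the backend-to-backend timing differences show up.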
Hey, I think the HF ChunkPipeline resets any num_workers value greater than 0 back to num_workers=1. See here. That said, I'll re-run the benchmark after setting this to a higher value.
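For what it's worth, the num_workers effect can be isolated with a small timing harness around the pipeline call. This is a generic sketch: the names `benchmark` and `transcribe` are hypothetical stand-ins, not functions from the linked benchmark script.

```python
import time

def benchmark(transcribe, files, warmup=1):
    """Time a transcription callable per file, excluding one-time setup.

    transcribe: any callable taking a file path (e.g. a wrapper around a
    HuggingFace pipeline call). Returns {file: seconds}.
    """
    # Warm-up calls so model load / CUDA initialization don't pollute timings.
    for f in files[:warmup]:
        transcribe(f)

    timings = {}
    for f in files:
        t0 = time.perf_counter()
        transcribe(f)
        timings[f] = time.perf_counter() - t0
    return timings
```

Running the same harness with different num_workers (and batch_size) settings would show whether the worker cap is actually responsible for the slowdown.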
That should not be an issue; for distil-whisper I only ran the benchmark on the KINCAID WAV file.
Hi @sanchit-gandhi !
Could you please review the benchmarking script I'm using? It's available at: https://github.com/shashikg/WhisperS2T/blob/main/scripts/benchmark_huggingface_distil.py
Thanks for your assistance!
Shashi