SYSTRAN / faster-whisper

Faster Whisper transcription with CTranslate2

faster_whisper with Batching #558

Closed · souvikqb closed this issue 2 weeks ago

souvikqb commented 1 year ago

Has anyone tried batch processing using faster_whisper?

By batch processing, I mean defining a batch_size and chunk_length to achieve greater inference speed.

Something similar to the Whisper transformers pipeline: https://huggingface.co/openai/whisper-large-v3
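
For reference, the transformers usage being described looks roughly like this (a minimal sketch based on the model card linked above; the file name and parameter values are illustrative, not recommendations):

```python
# Sketch of batched transcription with the transformers pipeline.
# "audio.mp3" and the parameter values below are placeholders.
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,  # split long audio into 30-second chunks
    batch_size=16,      # decode multiple chunks in parallel
)

result = pipe("audio.mp3")
print(result["text"])
```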

If so, please point me to that resource/repo.

Purfview commented 1 year ago

Dupe. You opened the same issue a few days ago: https://github.com/guillaumekln/faster-whisper/issues/553

AlbieRWang commented 1 year ago

> Dupe. You opened the same issue a few days ago: #553

But I can't load a CT2 model with the transformers pipeline. Is there any other way?
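
For context, CTranslate2-converted checkpoints are loaded through faster-whisper's own WhisperModel API rather than the transformers pipeline. A minimal sketch of the standard (non-batched) usage, with a placeholder audio file:

```python
# CT2 models are not compatible with transformers.pipeline; they are
# loaded via faster-whisper's own API. "audio.mp3" is a placeholder.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# transcribe() returns a generator of segments plus transcription info;
# it processes a single file sequentially rather than in batches.
segments, info = model.transcribe("audio.mp3", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```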