Closed: souvikqb closed this issue 2 weeks ago
Dupe. You opened the same issue a few days ago: https://github.com/guillaumekln/faster-whisper/issues/553
But I can't load a CT2 model with the transformers pipeline. Is there any other way?
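For reference, a CTranslate2 conversion is normally loaded through faster_whisper's own `WhisperModel` rather than the transformers pipeline. A minimal sketch, assuming a local CT2 model directory (the directory name and settings are illustrative, not from this thread):

```python
from faster_whisper import WhisperModel

# "ct2-whisper-large-v3" is a hypothetical local directory produced by
# ct2-transformers-converter; adjust device/compute_type to your hardware.
model = WhisperModel("ct2-whisper-large-v3", device="cuda", compute_type="float16")

# Transcribe a single file; faster-whisper returns a lazy generator of segments.
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print(segment.start, segment.end, segment.text)
```

Note this transcribes one file sequentially; it does not provide the batched decoding asked about below.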
Has anyone tried batch processing using faster_whisper?
By batch processing, I mean defining a batch_size and chunk_length, which can substantially increase inference speed.
Something similar to the whisper transformers pipeline: https://huggingface.co/openai/whisper-large-v3
If so, please point me to that resource/repo.
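For concreteness, this is the kind of chunked, batched decoding being referred to, following the pattern shown on the whisper-large-v3 model card. A minimal sketch; the parameter values and audio file name are illustrative:

```python
from transformers import pipeline

# Chunked long-form transcription with batched decoding, per the
# whisper-large-v3 model card; values here are illustrative, not tuned.
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,  # split long audio into 30-second chunks
    batch_size=16,      # decode multiple chunks per forward pass
)

result = pipe("audio.mp3")  # hypothetical local audio file
print(result["text"])
```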