-
### Describe the feature you'd like to request
For visibility. See: https://pypi.org/project/whisper-ctranslate2/
Whisper command line client compatible with original [OpenAI client](https://githu…
-
Hello,
Is batch execution of faster-whisper's transcribe possible? We've seen in this [thread](https://github.com/OpenNMT/CTranslate2/issues/1119) that batch execution should increase the throughput.…
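The calling pattern being asked about can be sketched as follows. This is a hedged illustration only: `transcribe_one` is a hypothetical stand-in for a real `model.transcribe()` call, and the real throughput gain in CTranslate2 comes from batching inputs inside a single forward pass, which recent faster-whisper versions expose via `BatchedInferencePipeline`; this sketch only mimics the shape of "submit many inputs, gather results".

```python
# Sketch: fan out many independent transcriptions and gather the results.
# `transcribe_one` is a placeholder, NOT faster-whisper's API.
from concurrent.futures import ThreadPoolExecutor

def transcribe_one(path: str) -> str:
    # Placeholder for something like model.transcribe(path).
    return f"transcript of {path}"

def transcribe_many(paths: list[str], max_workers: int = 4) -> list[str]:
    # Executor.map preserves input order, so results line up with paths.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(transcribe_one, paths))

results = transcribe_many(["a.wav", "b.wav", "c.wav"])
```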
-
Hi, I would like to use whisper as an STT module, as I want to try a specific model.
Could someone explain how to configure configuration.yaml?
I installed whisper,
and without any further modifi…
-
Since #856 got merged, I was wondering if we can have sending multiple files in one go into faster-whisper, something like:
```py
from faster_whisper import WhisperModel, BatchedInferencePipeline
…
```
-
The program (r192.3.4) crashes at the end of execution, before generating a subtitle file, on some videos with the tiny model, but usually exits correctly with other models on the same video (it may not be…
-
Hi
If the `tokenizer.json` isn't available in the model directory, faster-whisper automatically downloads the tokenizer from Hugging Face, which is a good thing. However, it always downloads…
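One way to avoid repeated downloads is to fetch the file once into the model directory and reuse it on later loads. A minimal sketch, assuming a caller-supplied `download` callable (hypothetical, not faster-whisper's API):

```python
# Sketch: cache tokenizer.json on disk so it is downloaded at most once.
import tempfile
from pathlib import Path

def ensure_tokenizer(model_dir: str, download) -> Path:
    # Reuse tokenizer.json if it already sits next to the model;
    # otherwise fetch it once and keep it there for future loads.
    target = Path(model_dir) / "tokenizer.json"
    if not target.exists():
        target.write_text(download())  # one-time fetch, cached on disk
    return target

# Demonstration with a stub downloader that counts its calls.
calls = []
def fetch() -> str:
    calls.append(1)
    return "{}"

tmp = tempfile.mkdtemp()
ensure_tokenizer(tmp, fetch)
ensure_tokenizer(tmp, fetch)  # second load hits the cached file
```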
-
It was very promising that faster-whisper actually started, but when audio is being processed, I get this:
```
INFO:__main__:Ready
INFO:faster_whisper:Processing audio with duration 00:01.530
Could n…
```
-
Hi! While I was trying to develop a PR for the shutdown lockup issue, I noticed that the recent commits on master broke model initialization:
```
RealTimeSTT: root - ERROR - Error initializing mai…
```
-
The new features, such as "multi-segment language detection" and "Batched faster-whisper", are not available in the latest version 1.0.3. Do you have any plans to release it? Is there anything I shoul…
-
I am performing a large number of transcriptions on limited GPU space.
I would like to cancel the model forwarding as soon as I know that I won't need the result. Is it possible to do this with fas…
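One pattern worth noting: faster-whisper's `transcribe()` returns segments lazily, so decoding work happens as you iterate, and stopping the iteration stops further decoding. A hedged sketch with a stub generator standing in for the real segment stream (`fake_segments` is hypothetical):

```python
# Sketch: early cancellation by breaking out of a lazy segment stream.
def fake_segments():
    # Stand-in for the lazy generator that model.transcribe() yields;
    # in the real library, decoding is performed as segments are pulled.
    for i in range(1000):
        yield f"segment {i}"

def transcribe_until(should_cancel, segments):
    # Stop consuming (and therefore stop further decoding) as soon as
    # the predicate says the rest of the result is not needed.
    out = []
    for seg in segments:
        if should_cancel(seg):
            break
        out.append(seg)
    return out

kept = transcribe_until(lambda s: s.endswith("3"), fake_segments())
```

This does not interrupt a forward pass that is already in flight on the GPU, but it does prevent any work for segments you never request.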