-
I read this https://github.com/ggerganov/whisper.cpp/issues/1099 so that I can configure the language at build time.
https://github.com/ggerganov/whisper.cpp/blob/021eef1000b0a84cc08575aac3352116c72e…
-
Hello,
I am using the faster-whisper-server on a Mac M1 with the following start command:
_docker run --publish 8000:8000 --volume ~/.cache/huggingface:/root/.cache/huggingface fedirz/faster-whi…
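For reference, once the container is up, requests can be sent with the OpenAI Python client, since faster-whisper-server exposes an OpenAI-compatible transcription route. This is only a minimal sketch; the model name, the `audio.wav` path, and the dummy API key are assumptions, not taken from the command above.

```python
# Sketch: transcribe a local file against the server started above.
# Assumptions: server on localhost:8000, "audio.wav" exists, and
# "Systran/faster-whisper-small" is a model the server can load.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="does-not-matter")

with open("audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="Systran/faster-whisper-small",
        file=audio_file,
    )

print(transcript.text)
```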
-
After updating from 1.0.3 to 1.1.0, an onnxruntime thread-affinity crash now occurs every time. Both versions run on an Nvidia A40 with 4 CPU cores, 48 GB VRAM, and 16 GB RAM (on a private Replicate server). Sho…
-
With local deployment, the PRELOAD_MODELS config variable works perfectly:
```
PRELOAD_MODELS='["Systran/faster-whisper-medium.en", "Systran/faster-whisper-small.en"]' MAX_MODELS=2 uvicorn main:a…
-
Hello.
After creating a Docker container by following the tutorial video and README, I tried live transcription of microphone input using ffmpeg, but it didn't work properly.
After checking the doc…
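In case it helps others debugging the same setup, below is a rough Python sketch of streaming raw PCM to the server's WebSocket route instead of piping ffmpeg output directly. The endpoint path, the 16 kHz mono s16le input format, and the `websockets` dependency are all assumptions here, not confirmed against the README.

```python
# Sketch: stream raw s16le PCM to a WebSocket transcription endpoint.
# Assumptions: endpoint ws://localhost:8000/v1/audio/transcriptions,
# 16 kHz mono s16le audio in "audio.pcm", and `pip install websockets`.
import asyncio
import websockets

async def stream_pcm(path: str) -> None:
    uri = "ws://localhost:8000/v1/audio/transcriptions"
    async with websockets.connect(uri) as ws:
        with open(path, "rb") as f:
            while chunk := f.read(4096):
                await ws.send(chunk)        # send raw audio bytes
                await asyncio.sleep(0.128)  # ~real-time pacing for 16 kHz mono s16le
        # print whatever transcription messages the server pushes back
        try:
            while True:
                print(await asyncio.wait_for(ws.recv(), timeout=2))
        except asyncio.TimeoutError:
            pass

asyncio.run(stream_pcm("audio.pcm"))
```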
-
Currently, Whisper's language identification outputs only the single language with the top probability:
```rust
fn main() {
    let file_path = std::env::args().nth(1).expect("Missing file path argument");
…
```
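For comparison, the Python faster-whisper API already surfaces the full detection distribution rather than just the top language; a minimal sketch follows. It assumes `info.all_language_probs` is populated when the language is left unset, and that a local `audio.wav` exists.

```python
# Sketch: print the top-5 language probabilities from faster-whisper's detector.
# Assumptions: faster-whisper installed, "audio.wav" present, and language
# auto-detection triggered by leaving `language=None`.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav", language=None)

print(f"top language: {info.language} (p={info.language_probability:.3f})")
for lang, prob in sorted(info.all_language_probs, key=lambda x: x[1], reverse=True)[:5]:
    print(f"{lang}: {prob:.3f}")
```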
-
I attempted to use faster-whisper in PyCharm, created a new project with a dedicated venv, and followed the installation instructions given in this repo. However, when I tried to run the example script …
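For anyone hitting the same wall, the usual smoke test is the short snippet from the faster-whisper README; a sketch of it is below, with the model size, device, and `audio.wav` path as placeholders.

```python
# Sketch of the canonical faster-whisper usage, close to the README example.
# Assumptions: faster-whisper is installed in the venv and "audio.wav" exists.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.wav", beam_size=5)
print(f"Detected language {info.language} with probability {info.language_probability:.2f}")

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```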
-
I am a noob and I can't use it offline. I have followed the whole process but it is not working. Can someone explain to me how to do it, please?
I downloaded whisper yesterday, but I needed the dictation fe…
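Not sure which implementation is meant here, but if it is faster-whisper, a common offline pattern is to download the model once while connected and then point the loader at the local copy. A rough sketch, with the cache path, model size, and audio file as placeholders:

```python
# Sketch: run faster-whisper fully offline after a one-time download.
# Assumptions: the model was previously downloaded into ./models (e.g. while
# online, with download_root="./models"), and "audio.wav" exists locally.
from faster_whisper import WhisperModel

model = WhisperModel(
    "small",
    device="cpu",
    compute_type="int8",
    download_root="./models",   # where the converted model was cached earlier
    local_files_only=True,      # fail instead of reaching out to the network
)

segments, _ = model.transcribe("audio.wav")
print(" ".join(segment.text for segment in segments))
```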
-
Hi
Is it possible to add turbo Whisper models like `deepdml/faster-whisper-large-v3-turbo-ct2`?
I saw that they are actually very fast when running them manually on my device, and it would be nice to h…
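Until it is added to the server's model list, faster-whisper itself can already load that CTranslate2 conversion directly by its Hugging Face repo id; a minimal sketch follows, with the GPU settings and audio path as assumptions.

```python
# Sketch: load the community CT2 conversion of large-v3-turbo by its repo id.
# Assumptions: a CUDA GPU with enough memory, and "audio.wav" present locally.
from faster_whisper import WhisperModel

model = WhisperModel(
    "deepdml/faster-whisper-large-v3-turbo-ct2",
    device="cuda",
    compute_type="float16",
)

segments, info = model.transcribe("audio.wav")
for segment in segments:
    print(segment.text)
```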
-
Hello,
We are currently looking to add faster-whisper into the [Open ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard).
Here is the script we are using to run the ev…