-
I did a very rough comparison of https://github.com/guillaumekln/faster-whisper and whisper.cpp; it turns out faster-whisper is faster than whisper.cpp on CPU.
E.g., it takes faster-whisper 14 seconds…
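Rough comparisons like this are very sensitive to thread count, compute type, and warm-up, so it helps to time both backends the same way. Below is a minimal, hedged timing-harness sketch: the `transcribe` callable and the audio path are placeholders you would wrap around faster-whisper's Python API or a whisper.cpp CLI invocation; nothing here is specific to either project.

```python
import time

def time_transcription(transcribe, audio_path, runs=3):
    """Return the best wall-clock time in seconds over `runs` calls.

    `transcribe` is any callable taking an audio path, e.g. a thin wrapper
    around faster_whisper.WhisperModel.transcribe or a subprocess call to
    the whisper.cpp binary. Best-of-N reduces noise from cold file caches
    and CPU frequency scaling.
    """
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        transcribe(audio_path)
        best = min(best, time.perf_counter() - t0)
    return best
```

For a fair CPU comparison, pin both backends to the same number of threads (e.g. `cpu_threads=` in faster-whisper, `-t` in whisper.cpp) before timing.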
-
Great work on the implementation! Just wondering whether you have considered integrating other Whisper models into the pipeline, such as faster-whisper (https://github.com/SYSTRAN/faster-whisper?tab…
-
(base) [root@app2 ~]# docker run --gpus all -p 1080:8000 -v /app:/root/.cache/huggingface/ 784630b8bc0a
==========
== CUDA ==
==========
CUDA Version 12.2.2
Container image Copyright (c) 2…
-
@abdeladim-s @BBC-Esq
https://huggingface.co/openai/whisper-large-v3-turbo
Are we working on supporting this? Or are we at the point where we can just drop the weights in?
It's supposed to be *a lot …
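For CTranslate2-based backends, the turbo checkpoint largely does "drop in" once it has been converted. A hedged loading sketch follows; the `"large-v3-turbo"` alias is an assumption about recent faster-whisper releases, and on older versions you would instead pass the repo id of a CTranslate2-converted turbo model explicitly.

```python
# Assumption: recent faster-whisper versions resolve the "large-v3-turbo"
# alias to a converted checkpoint; otherwise pass a CT2-converted repo id.
MODEL_ID = "large-v3-turbo"

def load_turbo(device="cpu", compute_type="int8"):
    # Imported lazily so the module can be inspected without the package.
    from faster_whisper import WhisperModel
    return WhisperModel(MODEL_ID, device=device, compute_type=compute_type)

def transcribe(path, **kwargs):
    model = load_turbo()
    segments, info = model.transcribe(path, **kwargs)
    return " ".join(seg.text for seg in segments), info.language
```

Note that turbo only shrinks the decoder, so the speed-up is largest on decode-heavy (long-output) audio.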
-
Can it support whisper-turbo?
-
Hi there,
First off, amazing job on your paper/the model! It looks super promising.
I'm working on a project where I'm attempting to do live streaming with Whisper. One of the challenges there i…
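A common workaround for live streaming is to buffer incoming audio and feed Whisper overlapping windows, merging the overlapping text afterwards. Here is a minimal sketch of the windowing step only; it assumes 16 kHz mono samples (Whisper's expected input rate), and the window/overlap lengths are illustrative, not tuned values.

```python
SAMPLE_RATE = 16_000  # Whisper expects 16 kHz mono input

def overlapping_windows(samples, window_s=10.0, overlap_s=2.0):
    """Yield (start_sample, chunk) pairs covering `samples`.

    Consecutive chunks overlap by `overlap_s` seconds so words cut at a
    window boundary reappear intact in the next window; a later merge
    step would deduplicate the overlapping transcript text.
    """
    window = int(window_s * SAMPLE_RATE)
    step = window - int(overlap_s * SAMPLE_RATE)
    start = 0
    while start < len(samples):
        yield start, samples[start:start + window]
        if start + window >= len(samples):
            break
        start += step
```

Each chunk would then be passed to the model independently, which trades some boundary accuracy for bounded latency.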
-
Hello
I tried to transcribe an interview with a health professional in which the audio mixes Telugu and English.
When I set the language to auto-detect, it displayed - *Detecte…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I run Home Assistant as a Docker container under Unraid.
I installed the 3 Wyoming docker…
-
Does whisperx support the new large-V3 turbo model?
-
Error: Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
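This error means the dynamic loader cannot find cuDNN 8 at runtime. A commonly used workaround is to install cuDNN from a pip wheel and point `LD_LIBRARY_PATH` at it; this is a sketch under assumptions: the wheel name and path layout come from NVIDIA's pip packaging, and since the missing file is `...so.8` you want a cuDNN 8.x wheel (the `cu11` series), matched to your CUDA major version.

```shell
# Workaround sketch: provide cuDNN 8 via pip and expose it to the loader.
# Assumptions: nvidia-cudnn-cu11 wheels ship cuDNN 8.x under a `lib/`
# directory inside the installed `nvidia.cudnn` package.
pip install nvidia-cudnn-cu11
export LD_LIBRARY_PATH="$(python -c 'import os, nvidia.cudnn; print(os.path.join(os.path.dirname(nvidia.cudnn.__file__), "lib"))'):$LD_LIBRARY_PATH"
```

The `export` only affects the current shell, so it needs to run before launching the transcription process (or go into the service's environment configuration).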