-
I've been using faster-whisper-server via Docker with my transcription script on Ubuntu for weeks with no issues, but the server has suddenly stopped working.
I get this error whenever I try to transcr…
-
For the same Whisper model (small), faster-whisper is actually slower than the original Whisper model.
To load the model, my code is:
"""
from faster_whisper import WhisperModel
model=WhisperModel("small"…
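For comparison, here is a minimal sketch of loading the small model with an explicit device and compute_type, since the defaults can fall back to slower settings; the `pick_compute_type` helper and the `audio.wav` path are illustrative, not from the original report.

```python
def pick_compute_type(device: str) -> str:
    # Illustrative helper: int8 quantization on CPU and float16 on GPU
    # are the settings where faster-whisper usually outpaces original Whisper.
    return "float16" if device == "cuda" else "int8"

def transcribe(path: str) -> None:
    # Imported lazily so the sketch stays self-contained.
    from faster_whisper import WhisperModel

    model = WhisperModel("small", device="cpu",
                         compute_type=pick_compute_type("cpu"))
    segments, info = model.transcribe(path, beam_size=5)
    for segment in segments:
        print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")

# Example invocation (requires faster-whisper installed and an audio file):
# transcribe("audio.wav")
```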
-
Hi there,
First off, amazing job on your paper/the model! It looks super promising.
I'm working on a project where I'm attempting to do live streaming with Whisper. One of the challenges there i…
-
Where can I find the instructions for deploying faster-whisper-v3-large with silero-vad? The official documentation only covers deploying Whisper.
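For what it's worth, recent faster-whisper releases bundle Silero VAD behind the `vad_filter` flag, so a separate silero-vad deployment may not be needed; the sketch below assumes that API, and the audio path and parameter values are illustrative.

```python
def transcribe_with_vad(path: str, model_size: str = "large-v3"):
    # Imported lazily so the sketch stays self-contained.
    from faster_whisper import WhisperModel

    model = WhisperModel(model_size, device="cpu", compute_type="int8")
    segments, info = model.transcribe(
        path,
        vad_filter=True,  # run the bundled Silero VAD before decoding
        vad_parameters={"min_silence_duration_ms": 500},
    )
    return [(s.start, s.end, s.text) for s in segments]

# Example invocation (requires faster-whisper installed and an audio file):
# transcribe_with_vad("audio.wav")
```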
-
Great work on the implementation! Just wondering if you have considered the integration of other whisper models into the pipeline, such as faster-whisper (https://github.com/SYSTRAN/faster-whisper?tab…
-
Implement a utility that will automatically generate both models from an HF Whisper model.
yairl updated 1 month ago
-
When I try to do this, I get the following error:
Traceback (most recent call last):
  File "D:\anaconda\Lib\site-packages\gradio\routes.py", line 534, in predict
    output = await route_utils.call_process_api(…
-
faster-whisper has been upgraded to 1.03. I compared the two versions, but I don't know how to swap the new one in.
Could you please do the upgrade instead?
Thanks
-
The log is as follows:
Load over
C:/Users/Long'Min/Desktop/yd/fasterwhispergui/whisper-large-v3-float32
max_length: 448
num_samples_per_token: 320
time_precision: 0.02
tokens_per_second: 50
input_…
-
How can I convert https://huggingface.co/primeline/whisper-large-v3-german to be used with faster-whisper?
Also, can faster-whisper use safetensors, and can I convert the above model to that format?
EDIT: Whe…
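One likely route (not confirmed in this thread) is CTranslate2's converter, which faster-whisper's README uses for custom Hugging Face checkpoints; the output directory name below is illustrative. The converter should read a safetensors checkpoint as input, but it emits CTranslate2's own `model.bin` format rather than safetensors.

```shell
pip install ctranslate2 "transformers[torch]"

# Convert the HF checkpoint to CTranslate2 format; float16 quantization
# is optional, and the output directory name is illustrative.
ct2-transformers-converter \
  --model primeline/whisper-large-v3-german \
  --output_dir whisper-large-v3-german-ct2 \
  --copy_files tokenizer.json preprocessor_config.json \
  --quantization float16
```

The resulting directory can then be passed to faster-whisper directly, e.g. `WhisperModel("whisper-large-v3-german-ct2")`.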