-
Is this related to CTranslate2?
The following is copied from [this issue](https://github.com/SYSTRAN/faster-whisper/issues/618).
I have run a test of batching in faster-whisper.
But faster_whi…
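For reference, a minimal sketch of what a batched run can look like on recent faster-whisper releases; the `BatchedInferencePipeline` class and the `batch_size` argument come from current versions and are not part of the original report:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

# Load the base model once; batching wraps it in a pipeline.
model = WhisperModel("large-v3", device="cuda", compute_type="float16")
batched_model = BatchedInferencePipeline(model=model)

# batch_size controls how many audio chunks are decoded together.
segments, info = batched_model.transcribe("audio.wav", batch_size=16)

for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```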
-
I've been trying all day to make faster-whisper run on CUDA, and it just does not work.
The model gets initialized on CUDA with no issues, but when I try to actually run the model, it crashes …
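For reference, the documented way to target CUDA is to pass `device` and `compute_type` when constructing the model; a minimal sketch (model name and audio path are placeholders):

```python
from faster_whisper import WhisperModel

# device="cuda" selects the GPU; compute_type="float16" needs a GPU with
# efficient FP16 support, otherwise CTranslate2 falls back to float32.
model = WhisperModel("large-v3", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.wav", beam_size=5)
print("Detected language:", info.language)

for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```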
-
```
[ctranslate2] [thread 10436] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights …
```
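This warning means CTranslate2 converts the weights instead of failing outright; requesting a compute type the hardware supports avoids the silent conversion. A minimal sketch, assuming a CPU-only target (the model name is a placeholder):

```python
from faster_whisper import WhisperModel

# On CPUs, or on GPUs without fast FP16, request a compute type the backend
# supports directly, e.g. int8 (quantized) or float32 (full precision).
model = WhisperModel("large-v3", device="cpu", compute_type="int8")
```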
-
**Is your feature request related to a problem? Please describe.**
I have to generate subtitles manually every time I add a new scene.
**Describe the solution you'd like**
Stash could automatical…
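As an illustration only, a hedged sketch of what automatic subtitle generation with faster-whisper could look like, written out as plain SRT; the file names and the `format_timestamp` helper are hypothetical and not part of Stash:

```python
from faster_whisper import WhisperModel

def format_timestamp(seconds: float) -> str:
    # SRT timestamps look like HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

model = WhisperModel("small", device="cpu", compute_type="int8")
segments, _ = model.transcribe("scene.mp4")

with open("scene.srt", "w", encoding="utf-8") as srt:
    for i, segment in enumerate(segments, start=1):
        srt.write(f"{i}\n")
        srt.write(f"{format_timestamp(segment.start)} --> {format_timestamp(segment.end)}\n")
        srt.write(f"{segment.text.strip()}\n\n")
```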
-
faster-whisper has been upgraded to 1.0.3. I compared the versions, but I don't know how to swap it in myself.
Could I trouble you to do the upgrade?
Thank you.
-
Hey all, after a nice conversation with @MahmoudAshraf97 on a different repo, I wanted to share some of my benchmark data. This was created using an RTX 4090 on Windows, no flash attention, with 5 be…
-
Hi there,
First off, amazing job on your paper/the model! It looks super promising.
I'm working on a project where I'm attempting to do live streaming with Whisper. One of the challenges there i…
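The message is cut off, but one common approach to live streaming is to buffer incoming audio and transcribe a rolling window; a rough sketch assuming 16 kHz mono float32 PCM chunks (the chunk handler, window length, and parameter values are all assumptions):

```python
import numpy as np
from faster_whisper import WhisperModel

SAMPLE_RATE = 16_000
model = WhisperModel("small", device="cuda", compute_type="float16")

buffer = np.zeros(0, dtype=np.float32)

def on_audio_chunk(chunk: np.ndarray) -> None:
    """Called by the streaming source with each new chunk of 16 kHz mono PCM."""
    global buffer
    buffer = np.concatenate([buffer, chunk])

    # Transcribe once a few seconds have accumulated; a real system would
    # also overlap windows and only commit text that is unlikely to change.
    if len(buffer) >= 5 * SAMPLE_RATE:
        segments, _ = model.transcribe(buffer, condition_on_previous_text=False)
        for segment in segments:
            print(segment.text)
        buffer = np.zeros(0, dtype=np.float32)
```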
-
Hi, I would like to use whisper as an STT module, as I want to try a specific model.
Could someone explain to me how to configure the configuration.yaml?
I installed whisper,
and without any further modifi…
-
The log is as follows:
Load over
C:/Users/Long'Min/Desktop/yd/fasterwhispergui/whisper-large-v3-float32
max_length: 448
num_samples_per_token: 320
time_precision: 0.02
tokens_per_second: 50
input_…
-
How to run faster-whisper using CUDA?