-
So from what I've seen, when the script runs it attempts to use the GPU if one is present, which of course is great. In fact, I think that's even the default. For whatever reason it doesn't run on the GPU on…
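For what it's worth, this is the kind of thing I'd expect to force GPU use (a minimal sketch assuming the script sits on top of faster-whisper; the model name and compute types are illustrative, not the project's actual defaults):
```python
# Hedged sketch: request the GPU explicitly in faster-whisper and fall back to
# CPU if the CUDA backend is unavailable. Values here are illustrative.
from faster_whisper import WhisperModel

try:
    model = WhisperModel("large-v3", device="cuda", compute_type="float16")
    print("Running on GPU")
except (RuntimeError, ValueError) as exc:
    print(f"GPU unavailable ({exc}); falling back to CPU")
    model = WhisperModel("large-v3", device="cpu", compute_type="int8")
```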
-
Has anyone tried batch processing using faster_whisper?
By batch processing, I mean defining a batch_size and chunk_length, which can help achieve greater inference speed.
Something similar to whis…
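For reference, recent faster-whisper releases ship a BatchedInferencePipeline; here's a minimal sketch of what I mean (the model name and batch_size are illustrative, and availability of the pipeline and of chunk_length depends on the installed version):
```python
# Hedged sketch of batched inference with faster-whisper (requires a release
# that includes BatchedInferencePipeline); all values below are illustrative.
from faster_whisper import WhisperModel, BatchedInferencePipeline

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
batched = BatchedInferencePipeline(model=model)

# batch_size controls how many audio windows are decoded together on the GPU.
segments, info = batched.transcribe("audio.mp3", batch_size=16)
for seg in segments:
    print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
```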
-
Here is part of the installation log; the installation got to the last step, sub_module. Please accept my thanks.
Requirement already satisfied: pillow>=8 in /root/miniconda3/envs/linly_dubbing/lib/python3.10/site-packages (from matplotlib>=3.7.0->TTS==0.22.0->-r /gemini/c…
-
Error is:
```
ggml_metal_init: load pipeline error: Error Domain=AGXMetalA12 Code=3 "Encountered unlowered function call to air.simd_max.f32" UserInfo={NSLocalizedDescription=Encountered unlowered…
-
Make the recording option of the transcription real-time
This will make it easier to use the application in a live transcription scenario.
Instead of the microphone recording alone, as it re…
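A rough sketch of what chunked live capture could look like (assuming the app can call into faster-whisper and that the sounddevice package is acceptable; the chunk length and model size are illustrative):
```python
# Hedged sketch: capture short microphone chunks and transcribe each one as it
# arrives, instead of recording everything first. Values are illustrative.
import sounddevice as sd
from faster_whisper import WhisperModel

SAMPLE_RATE = 16000   # Whisper models expect 16 kHz mono audio
CHUNK_SECONDS = 5     # latency vs. accuracy trade-off

model = WhisperModel("base", device="cpu", compute_type="int8")

while True:
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()  # block until this chunk has been recorded
    segments, _ = model.transcribe(audio.flatten())
    for seg in segments:
        print(f"[{seg.start:.1f}s -> {seg.end:.1f}s] {seg.text}")
```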
-
Instead of printing the raw result dict, we should print a more user-friendly output, something that includes a progress bar.
Example:
```
[2024/10/14 12:00] Starting benchmark: provider "f…
-
Have you tried building the spectrogram and encoder output in smaller chunks and appending? I think the spectrogram should generate fairly easily with minimal noise depending on the size of the chunk,…
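As a rough sketch of what I mean, using the openai-whisper helpers (the chunk size is an illustrative value, not a tuned one):
```python
# Rough sketch of chunked mel-spectrogram generation with openai-whisper.
import numpy as np
import whisper
from whisper.audio import SAMPLE_RATE, log_mel_spectrogram

CHUNK_SECONDS = 30  # illustrative chunk size
chunk_samples = CHUNK_SECONDS * SAMPLE_RATE

audio = whisper.load_audio("audio.wav")  # 16 kHz mono float32

mel_chunks = []
for start in range(0, len(audio), chunk_samples):
    chunk = audio[start:start + chunk_samples]
    mel_chunks.append(log_mel_spectrogram(chunk))  # (n_mels, n_frames)

# Append along the time axis; frames at the chunk boundaries can differ a bit
# from a single-pass spectrogram because STFT windows do not cross chunk edges.
mel = np.concatenate([m.numpy() for m in mel_chunks], axis=-1)
print(mel.shape)
```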
-
## Issue
I implemented a WebSocket-based version of the `whisper_online_server` to handle audio streams from clients over WebSocket connections. The implementation works as expected when a single cli…
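For context, the handler has roughly the shape sketched below (using a recent version of the websockets package; the DummyProcessor class, the raw-bytes framing, and the port are placeholders for what whisper_online_server actually does, and each connection gets its own processor instance):
```python
# Hedged sketch of a per-connection WebSocket handler; DummyProcessor stands in
# for the real streaming ASR pipeline and just echoes chunk sizes.
import asyncio
import websockets

class DummyProcessor:
    """Placeholder for the real online ASR processor (hypothetical)."""
    def process_chunk(self, chunk: bytes) -> str:
        return f"received {len(chunk)} bytes"

async def handle_client(websocket):
    processor = DummyProcessor()        # one processor per connection
    async for message in websocket:     # message: raw audio bytes from client
        text = processor.process_chunk(message)
        if text:
            await websocket.send(text)

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 43007):
        await asyncio.Future()          # run until cancelled

asyncio.run(main())
```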
-
I have downloaded two models from Hugging Face, BELLE-2/Belle-whisper-large-v3-zh-punct and Systran/faster-whisper-large-v3, and placed them in the cache directory following the paths in the configuration file. However, once the project starts running, it still goes and downloads the models again. Am I placing the local models incorrectly? Is there anything in particular to watch out for in the path naming? I would appreciate an answer.
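For reference, this is how I'd expect faster-whisper itself to pick up a local copy (a sketch only; "/path/to/cache" is a placeholder, not this project's configured cache directory, and the project may add its own loading logic on top):
```python
# Hedged sketch: two ways to make faster-whisper reuse an existing local copy.
from faster_whisper import WhisperModel

# Option 1: point directly at the downloaded model directory (it should contain
# files such as model.bin, config.json and the tokenizer/vocabulary files).
model = WhisperModel("/path/to/cache/Systran/faster-whisper-large-v3")

# Option 2: keep the repo id but pin the download directory, so an existing
# snapshot there is reused instead of being downloaded again.
model = WhisperModel("Systran/faster-whisper-large-v3",
                     download_root="/path/to/cache")
```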
-
I have been trying for a few hours and haven't been able to get it to run through the terminal, and am faced with new errors every time.