-
Thanks for the great work on the app!
I have an AMD x86_64 Linux machine and was interested in trying out this add-on's GPU-accelerated Whisper. However, I had read that the FasterWhispe…
-
FasterWhisper doesn't seem to support low-end GPUs, and running it on the CPU gives inaccurate results. Is there another speech-recognition option that would suit an old Tesla-generation card?
-
Hello, I have installed Speech Note on a rather old notebook and it works really well.
There's only one small issue: I can speak in German or English and it will always output English. All this altho…
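The always-English output usually means the language is being auto-detected (or left at a default) rather than pinned. A minimal sketch of pinning it explicitly, assuming faster-whisper is the backend (`pip install faster-whisper`; `"audio.wav"` is a placeholder path, and the `join_segments` helper is just for illustration):

```python
# Sketch: force the transcription language instead of auto-detecting,
# assuming the faster-whisper backend. Not the add-on's actual code.
from typing import Iterable


def join_segments(segments: Iterable) -> str:
    # faster-whisper yields Segment objects carrying a .text attribute.
    return " ".join(seg.text.strip() for seg in segments)


if __name__ == "__main__":
    from faster_whisper import WhisperModel

    model = WhisperModel("tiny", device="cpu")
    # language="de" skips detection entirely; language=None auto-detects,
    # which can mis-guess on short or noisy clips.
    segments, info = model.transcribe("audio.wav", language="de")
    print(join_segments(segments))
```

If the UI exposes a language setting, selecting German there should have the same effect as the `language="de"` argument above.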
-
Hi @Uberi, I wrote some extensions to your API for Faster Whisper and Distil-Whisper that only need to be added to the `__init__.py` file to work; they load the models automatically.
```
def rec…
```
-
When transcribing an hour of opus audio with either WhisperCPP Tiny or FasterWhisper Tiny, my CPU utilization looks like this:
![image](https://github.com/user-attachments/assets/f990d2dc-6db6-4e72…
yump, updated 2 months ago
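On the CPU-utilization point: faster-whisper's `WhisperModel` exposes `cpu_threads` (intra-op parallelism) and `num_workers` (parallel `transcribe()` calls), which govern how fully the CPU is loaded. A hedged sketch; the parameter names are real, but the thread-count policy below is just an illustration:

```python
# Sketch: pick a CPU thread count for faster-whisper on a desktop.
import os


def default_cpu_threads(reserve: int = 1) -> int:
    # Leave one core free so the desktop stays responsive during a
    # long transcription job; never go below one thread.
    return max(1, (os.cpu_count() or 1) - reserve)


if __name__ == "__main__":
    from faster_whisper import WhisperModel

    model = WhisperModel(
        "tiny",
        device="cpu",
        cpu_threads=default_cpu_threads(),  # intra-op parallelism
        num_workers=1,                      # parallel transcribe() calls
    )
    segments, _ = model.transcribe("audio.opus")
    for seg in segments:
        print(seg.text)
```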
-
A suggestion would be to add support for [Faster Whisper](https://github.com/guillaumekln/faster-whisper/), which is much faster and uses much less VRAM than Whisper. You can use the Whisper Large V2 …
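For anyone evaluating the suggestion, a rough sketch of what Faster Whisper usage looks like, assuming `pip install faster-whisper`. The `compute_type` quantization is what cuts memory use relative to the original fp16/fp32 Whisper checkpoints; `int8` and `int8_float16` are real CTranslate2 compute types, but the device-to-type mapping below is only an illustration:

```python
# Sketch: load a quantized Faster Whisper model to reduce (V)RAM use.
def pick_compute_type(device: str) -> str:
    # int8_float16 targets CUDA; plain int8 works on CPU.
    return "int8_float16" if device == "cuda" else "int8"


if __name__ == "__main__":
    from faster_whisper import WhisperModel

    device = "cuda"
    model = WhisperModel("large-v2", device=device,
                         compute_type=pick_compute_type(device))
    segments, info = model.transcribe("audio.wav", beam_size=5)
    print(info.language, info.language_probability)
```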
-
preprocessor_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 340/340 [00:00
-
This would be awesome. Currently I've looked into Vosk/Kaldi, OpenAI Whisper, and faster-whisper. I think faster-whisper has the best performance-to-compute-time ratio, though Vosk/Kaldi is wicked fast fo…
-
```
# Requires: pip install faster-whisper
from faster_whisper import WhisperModel

path = r"D:\Project\Python_Project\FasterWhisper\large-v3"
model = WhisperModel(model_size_or_path=path, device="cuda", local_files_only=True)
segments, info = model.transcribe("audio.wav",…
```
-
Just an FYI: OpenAI released a new version of Whisper Large V3 (Turbo), which is faster than the original model with minimal accuracy degradation. Consider adding it:
[Whisper Large V3 Turbo HF Link](htt…
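A hedged sketch of how the Turbo checkpoint might be loaded through faster-whisper. `"large-v3"` is a known built-in size; whether your faster-whisper version also accepts `"large-v3-turbo"` directly, and the CT2-converted Hugging Face repo id used as a fallback here, are assumptions to verify against the library's model list:

```python
# Sketch: resolve a requested model name to something faster-whisper
# can load. The fallback repo id is hypothetical, not verified.
KNOWN_SIZES = {"tiny", "base", "small", "medium", "large-v2", "large-v3"}


def resolve_model(name: str) -> str:
    if name in KNOWN_SIZES:
        return name
    # Assumed CT2 conversion on Hugging Face; check it exists first.
    return f"deepdml/faster-whisper-{name}-ct2"


if __name__ == "__main__":
    from faster_whisper import WhisperModel

    model = WhisperModel(resolve_model("large-v3-turbo"), device="cuda")
    segments, info = model.transcribe("audio.wav")
    for seg in segments:
        print(seg.text)
```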