-
If I pass `mps` to the device option, it crashes. It would be wonderful if the M1 GPU could be supported.
```
❯ whisperx assets/test.mp3 --device mps --model large-v2 --vad_filter --align_model WAV2VEC2_AS…
```
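Until MPS is supported end to end, a common pattern is to fall back to CPU when the requested accelerator is unavailable. The sketch below shows only the selection logic; `pick_device` is a hypothetical helper, not part of whisperx, and in real code the availability flags would come from `torch.backends.mps.is_available()` and `torch.cuda.is_available()`.

```python
def pick_device(requested: str, mps_ok: bool, cuda_ok: bool) -> str:
    """Return the requested device if available, else fall back to CPU.

    In real code, mps_ok / cuda_ok would come from
    torch.backends.mps.is_available() and torch.cuda.is_available();
    they are plain parameters here so the sketch stays self-contained.
    """
    if requested == "mps" and mps_ok:
        return "mps"
    if requested == "cuda" and cuda_ok:
        return "cuda"
    return "cpu"
```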
-
I have been trying to get whisperX to work on my GTX 970, but have been running into a myriad of problems. Please bear with me as I’m a beginner in all things programming.
I followed all the instal…
-
Hello!
I'm getting that error from this lib when running whisperx on an mp3. The complete traceback is:
```
Traceback (most recent call last):
  File "/home/facundo/devel/envwhisperx/bin/whispe…
```
-
Is it possible to investigate the problems of ctranslate2 in more detail? The library is one of the fastest and supports token streaming. Unfortunately, no token streaming is possible with beam search …
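The streaming limitation is inherent to beam search rather than to any particular library: a greedy decoder's token choice is final the moment it is made, so it can be yielded immediately, while beam search keeps several competing hypotheses whose shared prefix may still change. A self-contained toy sketch of that difference (not CTranslate2's actual code; the bigram score table is invented for illustration):

```python
# Toy greedy vs. beam-search decoders over a fixed bigram score table.
# Shows why greedy decoding can stream tokens while beam search cannot.

SCORES = {  # prev token -> {next token: log-probability}
    "<s>": {"a": -0.1, "b": -0.2},
    "a":   {"a": -2.0, "b": -0.1},
    "b":   {"a": -0.1, "b": -2.0},
}

def greedy_decode(steps: int) -> list[str]:
    out, prev = [], "<s>"
    for _ in range(steps):
        prev = max(SCORES[prev], key=SCORES[prev].get)
        out.append(prev)  # the choice is final: it could be streamed now
    return out

def beam_decode(steps: int, beam_size: int = 2) -> list[str]:
    beams = [(["<s>"], 0.0)]
    for _ in range(steps):
        candidates = [
            (seq + [tok], score + s)
            for seq, score in beams
            for tok, s in SCORES[seq[-1]].items()
        ]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        # nothing can be emitted here: the best hypothesis may still change
    return max(beams, key=lambda c: c[1])[0][1:]  # drop the "<s>" marker
```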
-
Hello, just wanted to start a discussion about running whisper-ctranslate2 in Docker. Referencing [#109 of faster-whisper](https://github.com/SYSTRAN/faster-whisper/issues/109#issuecomment-1498303518)…
-
According to some [recent analysis](https://twitter.com/HamelHusain/status/1685074309549858816) on twitter, [CTranslate2](https://github.com/OpenNMT/CTranslate2) can serve LLMs a little faster than vL…
-
I use the whisper-ctranslate2 CLI for faster-whisper. After updating ctranslate2 to 4.0.0, when I allocate a given GPU, e.g. GPU [1], about 250 MB of data is also loaded onto GPU [0], and GPU [0] is…
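A commonly used workaround for a small stray allocation on GPU 0 is to hide every other GPU from the process, so the intended card becomes the only visible device. This is a general CUDA mechanism (`CUDA_VISIBLE_DEVICES`), not anything ctranslate2-specific, and it must be set before any CUDA library is loaded:

```python
import os

# Hide every GPU except physical GPU 1; inside the process it is then
# addressed as device 0. This must run before torch / ctranslate2 are
# imported, because CUDA reads the variable at initialization time.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
```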
-
It was very promising that faster-whisper actually started, but when audio is being processed, I get this:
```
INFO:__main__:Ready
INFO:faster_whisper:Processing audio with duration 00:01.530
Could n…
```
-
Hey,
I am currently stuck trying to enable RoPE scaling for Llama-2 models. Is it supported? I went through the documentation, but there is not enough guidance on how to go about it.
Any help…
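Independent of whether any particular library exposes it, linear RoPE scaling itself is simple: position indices are divided by a scaling factor before the rotary angles are computed, so a model trained on N positions can address roughly N × factor. The sketch below illustrates that idea only; `rope_angles` is a hypothetical helper, not any library's API:

```python
def rope_angles(pos: int, dim: int, base: float = 10000.0,
                scaling_factor: float = 1.0) -> list[float]:
    """Rotation angles for one position under linear RoPE scaling.

    With scaling_factor > 1 the position index is shrunk before the
    standard rotary frequencies are applied, which is what stretches
    the usable context window.
    """
    scaled = pos / scaling_factor
    return [scaled / (base ** (2 * i / dim)) for i in range(dim // 2)]
```

For example, with a factor of 2, position 4096 produces exactly the angles the unscaled model saw at position 2048.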
-
Hello, I am very happy to finally see the demo of UnitySentis. This surprised me. I am a speech recognition algorithm engineer. We often run into problems such as model inference speed,…