-
Hi!
I happily stumbled onto your video on faster-whisper and learnt that RunPod is a thing and that they have serverless. I am wondering if you have a guide or template on how to set up faster whisper s…
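Not a full guide, but a minimal sketch of what a RunPod serverless worker for faster-whisper can look like. The handler shape, input keys, and model choice here are assumptions to adapt; `runpod.serverless.start` and the faster-whisper calls are the respective SDKs' documented entry points:

```python
# Sketch of a RunPod serverless handler for faster-whisper.
# Assumptions: the job payload carries a local "audio_path"; model name,
# device, and compute type are placeholders -- adapt to your template.
def handler(event):
    job_input = event.get("input") or {}
    audio_path = job_input.get("audio_path")
    if not audio_path:
        return {"error": "missing 'audio_path' in job input"}

    # Heavy imports are deferred so the module stays importable without a GPU.
    from faster_whisper import WhisperModel

    model = WhisperModel("large-v3", device="cuda", compute_type="float16")
    segments, info = model.transcribe(audio_path)
    return {
        "language": info.language,
        "segments": [
            {"start": s.start, "end": s.end, "text": s.text} for s in segments
        ],
    }

# In the deployed worker, RunPod's SDK wires the handler up like:
#   import runpod
#   runpod.serverless.start({"handler": handler})
```

Deferring the model load into the handler keeps cold-start testing cheap; for production you would typically hoist the `WhisperModel` construction out so it is loaded once per worker.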
-
After a certain segment, all subsequent recognized texts are incorrect:
```
from openai import OpenAI
client = OpenAI(api_key="cant-be-empty", base_url="http://192.168.31.100:8000/v1/")
…
```
-
Can I serve a SpeechBrain-trained Whisper model with faster-whisper?
-
It's time to upgrade faster_whisper to support faster-whisper-large-v3-turbo.
```
model = stable_whisper.load_faster_whisper("faster-whisper-large-v3-turbo")
Invalid model size 'faster-whisper-…
```
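For context, the "Invalid model size" error means the string was parsed as a built-in size alias. A sketch of a workaround: pass a CTranslate2-format repo id or local directory instead (the repo id below is one community conversion, an assumption rather than an official default; recent faster-whisper releases also register `large-v3-turbo` as a size):

```python
# Workaround sketch: load the turbo checkpoint by repo id / local path
# instead of a size alias.  The default repo id is an assumption --
# substitute any CTranslate2-format conversion of large-v3-turbo.
def load_turbo(model_id="deepdml/faster-whisper-large-v3-turbo-ct2"):
    import stable_whisper  # lazy import so the sketch parses without the package
    return stable_whisper.load_faster_whisper(model_id)
```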
-
Traceback (most recent call last):
File "/home/project/Whisper-WebUI/app.py", line 11, in <module>
from modules.whisper.whisper_factory import WhisperFactory
File "/home/project/Whisper-WebUI/modul…
-
@sanchit-gandhi
Where can I find faster-whisper model evaluation metrics? I don't see them on ASR leaderboard. Thanks!
-
Hi, running Ubuntu 22.04 LTS with faster-whisper and a custom Large model for Norwegian on Docker, w/GPU (RTX A4000).
Can I attach your WebUI to this container to use the already running instance of…
-
I fine-tuned a Whisper large-v3 model via [speechbrain](https://github.com/speechbrain/speechbrain) framework. I want to convert it to `faster-whisper` model and run inference on it via `faster-whispe…
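One possible route, sketched under assumptions: first export the SpeechBrain fine-tune as a Hugging Face Transformers checkpoint, then convert that checkpoint with the `ctranslate2` converter (directory names below are hypothetical):

```python
# Sketch: convert a Hugging Face-format Whisper checkpoint into the
# CTranslate2 layout that faster-whisper loads.  Assumes the SpeechBrain
# fine-tune has already been saved as a Transformers checkpoint in
# `hf_checkpoint_dir` (a hypothetical path).
def convert_to_ct2(hf_checkpoint_dir, output_dir):
    # Lazy import so the sketch parses without ctranslate2 installed.
    from ctranslate2.converters import TransformersConverter

    converter = TransformersConverter(
        hf_checkpoint_dir,
        copy_files=["tokenizer.json", "preprocessor_config.json"],
    )
    converter.convert(output_dir, quantization="float16")
    return output_dir

# Equivalent CLI, also shipped with the ctranslate2 package:
#   ct2-transformers-converter --model <hf_checkpoint_dir> \
#       --output_dir <output_dir> --copy_files tokenizer.json \
#       --quantization float16
```

The resulting directory can then be passed straight to `faster_whisper.WhisperModel(output_dir)`.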
-
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/li…
-
I am trying to use both of my GPUs, which are passed through to my Docker container.
`services: faster-whisper-server-cuda: image: fedirz/faster-whisper-server:latest-cuda build: dockerfile: Dockerfil…
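A sketch of the compose file with both GPUs reserved, assuming Docker Compose's `deploy.resources.reservations.devices` syntax and the NVIDIA container toolkit (service and image names follow the snippet above):

```yaml
services:
  faster-whisper-server-cuda:
    image: fedirz/faster-whisper-server:latest-cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              # expose both GPUs to the container; `count: all` also works
              device_ids: ["0", "1"]
              capabilities: [gpu]
```

Note that even with both GPUs visible, a single `WhisperModel` instance typically runs on one device unless it is constructed with something like `device_index=[0, 1]`.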