-
I am trying to use both of my GPUs, which are passed through to my Docker container.
```
services:
  faster-whisper-server-cuda:
    image: fedirz/faster-whisper-server:latest-cuda
    build:
      dockerfile: Dockerf…
```
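For context, a minimal sketch of how both GPUs can be exposed to that service with the Compose GPU reservation syntax; this only makes the devices visible inside the container, and it assumes the NVIDIA container toolkit is installed (the `count` / `device_ids` values are illustrative, not taken from the report):

```yaml
services:
  faster-whisper-server-cuda:
    image: fedirz/faster-whisper-server:latest-cuda
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2              # or: count: all, or device_ids: ["0", "1"]
              capabilities: [gpu]
```

Whether the server actually spreads inference across both visible GPUs is a separate question from the reservation itself.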
-
Hi, since yesterday's update on my Arch system, this started happening when I try to run it on my AMD machine; it used to work fine.
Before, it was compiled with `GGML_HIPBLAS=1 make -j`.
I have no ide…
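For anyone reproducing this, a rebuild sketch under the assumption that this is a whisper.cpp/llama.cpp-style Makefile build where `GGML_HIPBLAS=1` selects the ROCm backend; the `AMDGPU_TARGETS` variable and the `gfx1030` target are assumptions, not values from the report:

```sh
# Clean objects built against the pre-update ROCm, then rebuild the HIP backend.
make clean
# gfx1030 is a placeholder; substitute the actual GPU architecture if the
# Makefile honours AMDGPU_TARGETS.
GGML_HIPBLAS=1 AMDGPU_TARGETS=gfx1030 make -j
```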
-
### Contact Details
### What happened?
I use whisperfile without the `-tr` flag, but it translates anyway. How do I turn it off?
./whisper-large-v3.llamafile -f ../whisper/2570523.wav
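Possibly relevant, assuming whisperfile forwards whisper.cpp's CLI flags: pinning the spoken language instead of relying on the default can rule out the decoder being steered toward English output. The `de` value below is a placeholder for the audio's actual language:

```sh
# -l / --language sets the spoken language ('auto' for auto-detect);
# translation should stay off as long as -tr / --translate is not passed.
./whisper-large-v3.llamafile -f ../whisper/2570523.wav -l de
```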
### Versi…
-
### Confirm this is an issue with the Python library and not an underlying OpenAI API
- [X] This is an issue with the Python library
### Describe the bug
The behaviour of the `AzureOpenAI` client d…
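The description is cut off above, so as a point of reference, a minimal sketch of how the `AzureOpenAI` client is typically constructed with the current Python library; the endpoint, key, API version, and deployment name are placeholders, not values from the report:

```python
from openai import AzureOpenAI

# All values below are placeholders for illustration only.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key="<azure-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<deployment-name>",  # the Azure deployment name, not a base model id
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```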
-
Hey! For a while, I've been running a fork of this wonderful tool, and it's great to see its overall maturity grow. I'm using gemma2 on ollama and faster-whisper-server to run the backend with g…
-
Need some real examples...
-
### Description
Currently, if users are prompted for permission to allow whisper to send them notifications and they reject it, the flow just stops there.
We need a dialog that tells them what happens if…
-
Can it support whisper-turbo?
-
- [x] Measure and record current performance.
- [x] Rebase the model to main, ensure the PCC = 0.99
- [ ] [Port functionality to n300 card (single device)](https://github.com/tenstorrent/tt-metal/pull…
-
```
Prompt executed in 11.65 seconds
got prompt
****** refer in EchoMimic V2 mode!******
loaded temporal unet's pretrained weights from C:\comfyui-aki-torch240\models\echo_mimic\unet ...
Load motion m…
```