-
### Question or Issue
I installed nexa with the command from the README:
```
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/…
```
-
- [x] Measure and record current performance.
- [x] Rebase the model onto main and ensure the PCC = 0.99 (see the PCC sketch after this list)
- [ ] [Port functionality to n300 card (single device)](https://github.com/tenstorrent/tt-metal/pull…
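PCC here presumably means the Pearson correlation coefficient between the reference framework output and the on-device output, which is how these bring-ups are usually validated. A minimal sketch of that check, assuming two equally shaped output tensors:
```python
import numpy as np

def pcc(expected: np.ndarray, actual: np.ndarray) -> float:
    """Pearson correlation coefficient between two flattened tensors."""
    return float(np.corrcoef(expected.ravel(), actual.ravel())[0, 1])

# hypothetical usage against a PyTorch reference output:
# assert pcc(torch_out.detach().numpy(), device_out.numpy()) >= 0.99
```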
-
The verbose flag is set to `false`, yet too much is still logged, for example:
```
[dev:server]
[dev:server] stderr--- whisper_init_from_file_with_params_no_state: loading model from './models/ggml-base.bin…
```
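I haven't found a documented switch for these lines; they come from the native library on stderr, not from Python-level logging. If the model is loaded in-process, one workaround is to temporarily point file descriptor 2 at /dev/null around the load. A sketch of that idea (everything except the `os` calls is hypothetical):
```python
import os
from contextlib import contextmanager

@contextmanager
def silence_stderr():
    """Temporarily redirect fd 2 to /dev/null so C-library logs are dropped."""
    saved = os.dup(2)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, 2)
        yield
    finally:
        os.dup2(saved, 2)
        os.close(devnull)
        os.close(saved)

# hypothetical usage around whatever call loads ./models/ggml-base.bin:
# with silence_stderr():
#     model = load_whisper_model("./models/ggml-base.bin")
```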
-
When a whisper chat message from another player is printed to the client (via the `message` fn), it prints:
`You whisper to {player}: {message}`
It should be:
`{player} whispers to you: {message}`
…
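In other words, the template should depend on whether the viewing client is the sender or the recipient. A minimal sketch of that logic (all names are hypothetical, since the excerpt doesn't show the surrounding code):
```python
def format_whisper(viewer: str, sender: str, recipient: str, message: str) -> str:
    """Render a whisper line from the viewing client's perspective."""
    if viewer == sender:
        return f"You whisper to {recipient}: {message}"
    return f"{sender} whispers to you: {message}"
```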
-
It's time to upgrade faster_whisper to support faster-whisper-large-v3-turbo.
```
model = stable_whisper.load_faster_whisper("faster-whisper-large-v3-turbo")
Invalid model size 'faster-whisper-…
```
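Until the name is whitelisted, one workaround that should sidestep the "Invalid model size" check is to pass a full Hugging Face repo ID: a name containing a `/` is downloaded from the Hub instead of being validated against the built-in size list. A sketch, assuming the mobiuslabsgmbh conversion (audio file name is a placeholder):
```python
from faster_whisper import WhisperModel

# a repo ID with a slash bypasses the fixed size list and is fetched from the Hub
model = WhisperModel("mobiuslabsgmbh/faster-whisper-large-v3-turbo")
segments, info = model.transcribe("sample.wav")
print("".join(s.text for s in segments))
```
The same full repo ID may also work through `stable_whisper.load_faster_whisper`, since it forwards the model name to faster-whisper.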
-
Hey! For a while, I've been running a fork of this wonderful tool, and it's great to see its overall maturity grow. I'm using gemma2 on ollama and faster-whisper-server to run the backend with g…
-
Hello,
In my testing I found that the turbo model of faster-whisper is slower than OpenAI's. I would like to know whether my conclusion is correct.
- openai/whisper-large-v3-turbo
- mobiuslabsgmbh/faste…
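It's hard to say without the measurement details; one pitfall worth ruling out is that faster-whisper's `transcribe()` returns a lazy generator, so decoding only happens when the segments are consumed, and an unconsumed generator can make either side of a comparison misleading. A minimal timing sketch under that assumption (file name and device settings are placeholders; the `large-v3-turbo` alias needs a recent openai-whisper release):
```python
import time

import whisper                        # the openai-whisper package
from faster_whisper import WhisperModel

AUDIO = "sample.wav"                  # placeholder test file

# load both models up front so only transcription time is compared
ow = whisper.load_model("large-v3-turbo")
fw = WhisperModel("mobiuslabsgmbh/faster-whisper-large-v3-turbo",
                  device="cuda", compute_type="float16")

t0 = time.perf_counter()
ow.transcribe(AUDIO)
print(f"openai-whisper: {time.perf_counter() - t0:.1f}s")

t0 = time.perf_counter()
segments, _ = fw.transcribe(AUDIO)
text = "".join(s.text for s in segments)  # consume the generator, or nothing is decoded
print(f"faster-whisper: {time.perf_counter() - t0:.1f}s")
```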
-
If there is a way to auto-detect whether a file is a language model or an ASR model, we should do that. If that's not possible, we should just use a runtime flag, so some options for the runtime flag …
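If auto-detection turns out to be unreliable, the flag could default to auto and let the user override it. A sketch with hypothetical flag and choice names:
```python
import argparse

parser = argparse.ArgumentParser()
# "--model-type" and its choices are only suggested names for illustration
parser.add_argument("--model-type", choices=("auto", "lm", "asr"), default="auto",
                    help="Treat the model file as a language model or an ASR model; "
                         "'auto' tries to detect the type from the file itself.")
args = parser.parse_args()
```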
-
Congratulations on your release @LaurinmyReha @BernhardThNyra! It would be wonderful to host a Gradio app for CrisperWhisper on Huggingface Spaces using our free A100 Community grants. For instance, …
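A minimal Space could wrap the checkpoint in a transformers ASR pipeline behind a Gradio interface; the sketch below assumes the checkpoint is published as nyrahealth/CrisperWhisper and loads through the standard pipeline:
```python
import gradio as gr
from transformers import pipeline

# "nyrahealth/CrisperWhisper" is an assumed repo ID; substitute the real one
asr = pipeline("automatic-speech-recognition", model="nyrahealth/CrisperWhisper")

def transcribe(audio_path: str) -> str:
    return asr(audio_path)["text"]

gr.Interface(fn=transcribe,
             inputs=gr.Audio(type="filepath"),
             outputs="text",
             title="CrisperWhisper demo").launch()
```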
-
In my tests, Distil-Whisper models are inferior to the OpenAI Whisper large-v2/v3 models and not something I would use. Maybe OWSM models could be better? Could they be added? Or how can they be added manually to …
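OWSM checkpoints are released through ESPnet rather than as Whisper-format weights, so they would need a separate loading path. As far as I understand the ESPnet API, loading looks roughly like this (the repo ID and keyword names are from memory and should be double-checked against the OWSM release notes):
```python
import soundfile as sf
from espnet2.bin.s2t_inference import Speech2Text

# repo ID and symbol kwargs are assumptions based on the OWSM examples
model = Speech2Text.from_pretrained("espnet/owsm_v3.1_ebf",
                                    lang_sym="<eng>", task_sym="<asr>")
speech, rate = sf.read("sample.wav")   # OWSM expects 16 kHz mono audio
print(model(speech)[0][0])             # text field of the first hypothesis
```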