-
I get this CoreML error when running conversion with quantization:
```shell
python3 models/convert-whisper-to-coreml.py --model tiny.en --encoder-only True --quantize True
```
Stacktrace:
```…
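For orientation (not from the original report), the quantization step in a CoreML conversion like this is commonly done with coremltools' weight-quantization utility; below is a minimal sketch under that assumption, with a made-up model path, not necessarily what the script itself does:

```python
# Hedged sketch of CoreML weight quantization via coremltools; the model path
# and nbits value are illustrative assumptions, not taken from the script above.
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# Load a previously converted full-precision Core ML model (hypothetical path).
model = ct.models.MLModel("models/coreml-encoder-tiny.en.mlmodel")

# Reduce weight precision to 16 bits; failures during quantization would
# surface around a call like this.
quantized = quantization_utils.quantize_weights(model, nbits=16)
quantized.save("models/coreml-encoder-tiny.en-q16.mlmodel")
```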
-
Hi!
I heard about a very promising model a while ago that you might be interested in. It's called fish.audio.
Here's a YouTube demo: https://www.youtube.com/watch?v=Ghc8cJdQyKQ
Here's the…
-
The primary source of triggers is the output of the overlap control, but voice triggers come from many places.
On certain events, triggers need to be sent. More destinations could be added. For exampl…
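Purely as an illustration of that pattern (nothing here comes from the actual code, and every name is hypothetical), a tiny Python sketch of routing triggers from a source to a growable set of destinations might look like:

```python
# Illustrative-only sketch of a trigger router: sources (e.g. the overlap
# control) emit triggers, and each trigger is fanned out to any number of
# destinations. All class and method names are hypothetical.
from typing import Callable, List


class TriggerRouter:
    def __init__(self) -> None:
        self._destinations: List[Callable[[str], None]] = []

    def add_destination(self, destination: Callable[[str], None]) -> None:
        # New destinations can be registered without touching the sources.
        self._destinations.append(destination)

    def send(self, event: str) -> None:
        # Called by a source (e.g. the overlap control output) when a trigger fires.
        for destination in self._destinations:
            destination(event)


router = TriggerRouter()
router.add_destination(lambda event: print(f"trigger -> {event}"))
router.send("overlap-control")
```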
-
Have you seen the new fish-speech model (https://github.com/fishaudio/fish-speech)?
It has wonderful voice cloning and intonation performance.
Would you consider supporting it?
-
It would be great to have an Entrian sequencer that's an audio sampler... I tried using the CV one with just audio in, but it doesn't work; even with quantization turned off it seems like it does something…
-
Can I serve a SpeechBrain-trained Whisper model with faster-whisper?
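Not an authoritative answer, but the usual route would be to get the fine-tuned checkpoint into Hugging Face Whisper format, convert it with CTranslate2, and load it with faster-whisper. A hedged sketch, with placeholder directory and file names:

```python
# Hedged sketch: serving a fine-tuned Whisper checkpoint with faster-whisper.
# Assumes the SpeechBrain fine-tune has already been exported to a Hugging Face
# Whisper layout; directory names are placeholders.
import ctranslate2
from faster_whisper import WhisperModel

# 1. Convert the Hugging Face-format checkpoint to CTranslate2.
converter = ctranslate2.converters.TransformersConverter(
    "whisper-finetuned-hf",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("whisper-finetuned-ct2", quantization="int8")

# 2. Load the converted model with faster-whisper and transcribe.
model = WhisperModel("whisper-finetuned-ct2", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav")
for segment in segments:
    print(segment.start, segment.end, segment.text)
```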
-
Hello guys, I wrote a streaming inference pipeline in Python for this project, including torch.jit.script, int8 dynamic quantization, and a streaming interface for the audio encoder and decoder (style v…
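For readers unfamiliar with those two pieces, here is a minimal sketch of int8 dynamic quantization plus torch.jit.script on a toy module; it only illustrates the mechanism, not the pipeline's actual encoder or decoder:

```python
# Minimal sketch of the two optimizations mentioned above: int8 dynamic
# quantization followed by TorchScript compilation. The model is a toy
# stand-in, not the actual audio encoder/decoder from the pipeline.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.proj = nn.Linear(80, 256)
        self.out = nn.Linear(256, 256)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(torch.relu(self.proj(x)))


model = TinyEncoder().eval()

# Replace Linear layers with dynamically quantized int8 versions.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Compile to TorchScript so the model can be serialized and run without Python overhead.
scripted = torch.jit.script(quantized)
scripted.save("tiny_encoder_int8.pt")

# Feed one chunk at a time, as a streaming interface would.
chunk = torch.randn(1, 10, 80)  # (batch, frames, mel bins)
print(scripted(chunk).shape)
```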
-
I want to reproduce the results in the paper "HiFi-Codec: Group-Residual Vector Quantization for High Fidelity Audio Codec". However, the description is confusing. The paper just says that the trai…
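To make the question concrete, here is an unofficial sketch of how group-residual vector quantization is commonly understood: split the latent into groups along the channel dimension and apply a stack of residual codebooks to each group. Shapes and codebook sizes below are assumptions, not the paper's settings:

```python
# Unofficial sketch of group-residual vector quantization (GRVQ): split the
# latent into groups, then quantize each group with a stack of residual
# codebooks. Shapes, sizes and initialization are assumptions.
import torch


def nearest_code(residual: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # residual: (batch, dim), codebook: (codebook_size, dim)
    distances = torch.cdist(residual, codebook)  # (batch, codebook_size)
    indices = distances.argmin(dim=-1)           # (batch,)
    return codebook[indices]                     # (batch, dim)


def group_residual_vq(latent: torch.Tensor, codebooks: list) -> torch.Tensor:
    # latent: (batch, dim); codebooks: one list of residual codebooks per group.
    groups = latent.chunk(len(codebooks), dim=-1)
    quantized_groups = []
    for group, residual_codebooks in zip(groups, codebooks):
        residual = group
        quantized = torch.zeros_like(group)
        for codebook in residual_codebooks:
            code = nearest_code(residual, codebook)
            quantized = quantized + code
            residual = residual - code
        quantized_groups.append(quantized)
    return torch.cat(quantized_groups, dim=-1)


latent = torch.randn(4, 256)                           # toy latent
codebooks = [[torch.randn(1024, 128) for _ in range(2)]  # 2 residual stages per group
             for _ in range(2)]                          # 2 groups of 128 dims each
print(group_residual_vq(latent, codebooks).shape)        # torch.Size([4, 256])
```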
-
### Is your feature request related to a problem?
FluidSynth's "gain" is very variable depending on a myriad of possible situations; The programs in the Sound Font used being too loud over others; th…
-
### Describe the issue
Hi,
I was trying to statically quantize [this Coqui VITS model](https://github.com/coqui-ai/TTS/blob/e5fb0d96279af9dc620add6c2e69992c8abd7f24/TTS/.models.json#L143) that I h…
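For context, ONNX Runtime static quantization is driven by a CalibrationDataReader; below is a hedged sketch of that setup, where the ONNX path, input name, and calibration data are placeholders rather than the actual VITS inputs:

```python
# Hedged sketch of onnxruntime static quantization with a calibration reader.
# The ONNX paths and input name are placeholders, not the real VITS graph inputs.
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static


class DummyReader(CalibrationDataReader):
    def __init__(self, num_samples: int = 8) -> None:
        # Hypothetical calibration batches keyed by the model's input name.
        self._data = iter(
            {"input": np.random.randn(1, 128).astype(np.float32)}
            for _ in range(num_samples)
        )

    def get_next(self):
        # Return the next calibration batch, or None when exhausted.
        return next(self._data, None)


quantize_static(
    model_input="vits.onnx",        # placeholder path
    model_output="vits.int8.onnx",
    calibration_data_reader=DummyReader(),
    weight_type=QuantType.QInt8,
)
```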