fishaudio / fish-speech

Brand new TTS solution
https://speech.fish.audio

[Feature] Existing streaming latency still takes time #417

Closed kunci115 closed 3 weeks ago

kunci115 commented 3 months ago

Streaming on a 4090 takes more than 2 seconds, depending on the number of tokens. Is there a way to yield/return audio while the engine is still generating?
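The idea being asked for here is chunked streaming: emit partial audio as soon as a few tokens are decoded instead of waiting for the full generation. Below is a minimal, hypothetical sketch of that pattern; `generate_tokens` and `decode_chunk` are stand-ins, not the fish-speech API.

```python
# Sketch of chunked streaming. generate_tokens and decode_chunk are
# placeholders for the autoregressive generator and the vocoder step.

def generate_tokens(text):
    """Stand-in for the autoregressive token generator."""
    for ch in text:
        yield ord(ch)

def decode_chunk(tokens):
    """Stand-in for the step that turns a token buffer into audio bytes."""
    return bytes(t % 256 for t in tokens)

def stream_audio(text, chunk_tokens=4):
    """Yield partial audio whenever chunk_tokens tokens are buffered,
    so the caller can start playback before generation finishes."""
    buffer = []
    for token in generate_tokens(text):
        buffer.append(token)
        if len(buffer) >= chunk_tokens:
            yield decode_chunk(buffer)
            buffer.clear()
    if buffer:  # flush the final partial chunk
        yield decode_chunk(buffer)

chunks = list(stream_audio("hello world"))
```

A real implementation would need the decoder to tolerate partial token sequences (or overlap chunks) to avoid audible seams at chunk boundaries.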

Stardust-minus commented 3 months ago

PR Welcome

PoTaTo-Mika commented 3 months ago

Please compile the model, or try the quantized version.

kunci115 commented 3 months ago

@PoTaTo-Mika what do you mean by compiling the model? Also, how do I produce the quantized version? I only followed the inference steps in the English documentation: https://speech.fish.audio/en/inference/#2-create-a-directory-structure-similar-to-the-following-within-the-ref_data-folder

PoTaTo-Mika commented 3 months ago

There's a Python file called quantize.py; you can view the file and choose how to quantize.
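For context, the general idea behind a script like quantize.py is post-training quantization: replacing float32 weights with lower-precision ones to shrink the model and speed up matmuls. The sketch below uses PyTorch's built-in dynamic quantization API on a toy module; fish-speech's quantize.py has its own procedure, so this only illustrates the concept.

```python
# Illustrative int8 dynamic quantization with PyTorch's stock API.
import torch

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())

# Dynamically quantize the Linear layers' weights to int8;
# activations are quantized on the fly at inference time.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
out = qmodel(torch.randn(2, 16))
```

Dynamic quantization mainly helps CPU inference; on a GPU like the 4090, compilation and reduced-precision (fp16/bf16) inference are usually the bigger levers.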

kunci115 commented 3 months ago

> There's a Python file called quantize.py; you can view the file and choose how to quantize.

It's creating a folder with a quantized version of the model now. Do I just run it like the previous run, pointing at that checkpoint? I still get the same latency.

github-actions[bot] commented 2 months ago

This issue is stale because it has been open for 30 days with no activity.