k2-fsa / sherpa-onnx

Speech-to-text, text-to-speech, speaker recognition, and VAD using next-gen Kaldi with onnxruntime without Internet connection. Support embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 servers, websocket server/client, C/C++, Python, Kotlin, C#, Go, NodeJS, Java, Swift, Dart, JavaScript, Flutter, Object Pascal, Lazarus, Rust
https://k2-fsa.github.io/sherpa/onnx/index.html
Apache License 2.0
3.11k stars · 360 forks

[FR][TTS][Android]Fibo split #1222

Open mablue opened 1 month ago

mablue commented 1 month ago

The main problem is that blind users experience a lot of latency with screen readers because of how the sherpa models process text. How can we fix this? In our custom screen readers we split text at punctuation marks into small parts so they process quickly, but on social media and elsewhere there are many long messages in which users did not use any punctuation at all.

Please add a punctuation-marks field and a "Fibonacci split" checkbox to make processing faster. A Fibonacci-like splitting algorithm would send pieces to the model incrementally, so the engine starts speaking sooner:

1. Split off the first word, send it to the model, and speak it
2. Split off the next 1 word and speak it
3. Split off the next 2 words and speak them
4. Split off the next 3 words and speak them
5. Split off the next 5 words and speak them
...
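The idea above can be sketched as follows. This is a hypothetical illustration (the function name `FiboSplit` is not part of sherpa-onnx): the input words are grouped into chunks whose sizes follow the Fibonacci sequence 1, 1, 2, 3, 5, ..., so the first chunks are tiny and playback can begin almost immediately.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Split `words` into consecutive chunks of Fibonacci sizes (1, 1, 2, 3, 5, ...).
// The final chunk is truncated to whatever words remain.
std::vector<std::vector<std::string>> FiboSplit(
    const std::vector<std::string> &words) {
  std::vector<std::vector<std::string>> chunks;
  std::size_t a = 1, b = 1;  // current and next Fibonacci chunk sizes
  std::size_t pos = 0;
  while (pos < words.size()) {
    std::size_t n = std::min(a, words.size() - pos);
    chunks.emplace_back(words.begin() + pos, words.begin() + pos + n);
    pos += n;
    std::size_t next = a + b;  // advance the Fibonacci sequence
    a = b;
    b = next;
  }
  return chunks;
}
```

Each chunk would then be fed to the TTS model and played while the next chunk is being synthesized.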

csukuangfj commented 1 month ago

Would you like to contribute?

An alternative is to split a sentence into smaller ones if the number of tokens is greater than some threshold, say, 10.

mablue commented 1 month ago

Yes, I'd like to. Where should I start? Your proposal is fast and simple; I will work on it.

csukuangfj commented 1 month ago

Please see https://github.com/k2-fsa/sherpa-onnx/blob/9ee2943ed45d9bb80bfd33f178aae7259d94188b/sherpa-onnx/csrc/offline-tts-vits-impl.h#L178-L179

If token_ids[i].tokens.size() is larger than a threshold, then you need to split token_ids[i].tokens into smaller pieces so that each piece's length is smaller than the given threshold.

The basic knowledge is knowing how to split a std::vector<int32_t> into smaller vectors.
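A minimal sketch of that vector-splitting step (the function name `SplitTokens` is illustrative, not an existing sherpa-onnx API): cut a token-ID vector into consecutive pieces, each no longer than a given threshold.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Split `tokens` into consecutive pieces of at most `threshold` elements.
// The last piece holds whatever tokens remain.
std::vector<std::vector<int32_t>> SplitTokens(
    const std::vector<int32_t> &tokens, std::size_t threshold) {
  std::vector<std::vector<int32_t>> pieces;
  for (std::size_t i = 0; i < tokens.size(); i += threshold) {
    std::size_t end = std::min(i + threshold, tokens.size());
    pieces.emplace_back(tokens.begin() + i, tokens.begin() + end);
  }
  return pieces;
}
```

In the real code, each piece would replace one oversized `token_ids[i].tokens` entry before being passed to the model.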

GeminiT369 commented 1 month ago

I think this should be carefully considered. When the VITS model synthesizes sounds, the pronunciation of a word is related to its context. Improper segmentation may lead to low synthesis effects.

mablue commented 1 month ago

> Please see
>
> https://github.com/k2-fsa/sherpa-onnx/blob/9ee2943ed45d9bb80bfd33f178aae7259d94188b/sherpa-onnx/csrc/offline-tts-vits-impl.h#L178-L179
>
> If token_ids[i].tokens.size() is larger than a threshold, then you need to split token_ids[i].tokens into smaller pieces so that each piece's length is smaller than the given threshold.

Oops, C++?! :D Okay, I will give it a try.

@GeminiT369 It is very important for someone who uses a screen reader, and it would be optional, so you could enable or disable it. Also, for languages like Persian we just want an alternative to eSpeak, so a small loss in synthesis quality doesn't matter. For someone with a visual impairment, having a fast TTS is what counts. Maybe if we could shrink the models to 2 MB that would fix everything... but we can't do that. Or can we? I don't know.

csukuangfj commented 1 month ago

By the way, we can limit the changes only to Persian.

danpovey commented 3 weeks ago

Maybe the latency or max sentence length can be made tunable? Having a finite max sentence length may also prevent OOM.
