ai-ng / swift

Fast voice assistant powered by Groq, Cartesia, and Vercel.
https://swift-ai.vercel.app
MIT License

Streaming #9

Open braco opened 1 month ago

braco commented 1 month ago

Wouldn't it be better to use streaming interfaces in both the LLM and speech systems?

For example:

https://github.com/elevenlabs/elevenlabs-js/issues/4#issuecomment-2004696164

Vercel should support this:

https://vercel.com/docs/functions/streaming

athrael-soju commented 1 month ago

To stream via Groq you'd want to set `stream: true` in the completions call. But then you have to face the challenge of how TTS will handle the stream. You can't begin synthesis immediately, because that means you'd attempt to generate speech from just a few tokens that may not form a complete sentence, and it will sound very bad to the user.
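A minimal sketch of the streaming side, assuming the official `groq-sdk` package (the model name is just an example):

```ts
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function streamCompletion(prompt: string) {
  const stream = await groq.chat.completions.create({
    model: "llama3-8b-8192",
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content ?? "";
    // Tokens arrive a few characters at a time -- far too small to synthesize directly.
    process.stdout.write(token);
  }
}
```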

If you choose to tokenize the stream chunks into sentences, you'll have to add logic to queue/dequeue sentences as they arrive before sending them for synthesis. This would work, but it adds computational overhead and complexity (see the sketch below). I added something like this in an older project.
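Roughly what that queue/dequeue logic looks like: buffer streamed tokens until a sentence boundary, then hand each complete sentence to TTS so it never sees partial phrases. `synthesizeSentence` here is a hypothetical stand-in for whatever TTS client you'd actually call:

```ts
async function speakStream(tokens: AsyncIterable<string>) {
  let buffer = "";
  const queue: string[] = [];

  for await (const token of tokens) {
    buffer += token;
    // Naive sentence boundary: ., ! or ? followed by whitespace.
    let match: RegExpMatchArray | null;
    while ((match = buffer.match(/^(.*?[.!?])\s+(.*)$/s)) !== null) {
      queue.push(match[1]);
      buffer = match[2];
    }
    // Dequeue eagerly so synthesis overlaps with generation.
    while (queue.length > 0) {
      await synthesizeSentence(queue.shift()!);
    }
  }
  if (buffer.trim()) await synthesizeSentence(buffer); // flush the tail
}

// Placeholder for the actual TTS call.
async function synthesizeSentence(sentence: string): Promise<void> {
  /* ... */
}
```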

The best-case scenario here is that, since Groq is really fast, you send the full text response to the speech API and just stream the speech itself back to the front end.
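A rough sketch of that approach as a Vercel route handler: get the full LLM text first, request speech, and pipe the audio bytes straight through to the browser instead of buffering the whole clip. The TTS endpoint URL, payload shape, and `getCompletion` helper below are placeholders, not the project's or Cartesia's actual API:

```ts
export async function POST(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  // Non-streaming LLM call; Groq returns the full text quickly.
  const text = await getCompletion(prompt);

  const ttsResponse = await fetch("https://tts.example.com/v1/speech", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });

  // Forward the audio stream as it arrives.
  return new Response(ttsResponse.body, {
    headers: { "Content-Type": "audio/mpeg" },
  });
}

// Hypothetical wrapper around a non-streaming Groq completion.
async function getCompletion(prompt: string): Promise<string> {
  /* ... */
  return "";
}
```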