JohnnySn0w / Echo

Voice-to-voice personal assistant. Fully local, GPU-vendor agnostic.

Split audio generation for LLM response if too large #18

Open JohnnySn0w opened 3 months ago

JohnnySn0w commented 3 months ago

Splitting responses that exceed a certain length into multiple audio segments would let us deliver those segments separately. Playback of a long response could then start audibly before the whole response has finished being processed.
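
A minimal sketch of what that could look like, assuming hypothetical `synthesize(text) -> bytes` and `play(audio)` callables (neither name comes from the Echo codebase): the response is split on sentence boundaries, audio for each chunk is generated in a background thread, and earlier chunks are played while later ones are still being synthesized.

```python
import queue
import re
import threading


def split_response(text: str, max_chars: int = 300) -> list[str]:
    """Split an LLM response into sentence-aligned chunks under max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks


def stream_audio(text: str, synthesize, play) -> None:
    """Generate audio per chunk in the background; play chunks as they arrive."""
    audio_queue: queue.Queue = queue.Queue()

    def producer():
        for chunk in split_response(text):
            audio_queue.put(synthesize(chunk))  # generation runs ahead of playback
        audio_queue.put(None)  # sentinel: no more chunks

    threading.Thread(target=producer, daemon=True).start()
    while (audio := audio_queue.get()) is not None:
        play(audio)  # earlier chunks play while later ones are still generating
```

The chunk size and sentence-splitting regex here are placeholders; the real split point would probably depend on the TTS model's preferred input length.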