Open santhosh-sp opened 11 months ago
Is this possible?
Yes, it's possible.
Is it built into the [xtts-streaming-server] repo, or does it have to be tweaked?
I was getting ready to test it out this weekend before I install it.
Any chance you can post some working examples? I was able to get the Docker container working, but I don't see any logic for providing the yielded chunks as the text to the API.
Is splitting at the end of a sentence (.?!) the best option here?
```python
def llm_write(prompt: str):
    buffer = ""
    for chunk in openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    ):
        if (text_chunk := chunk["choices"][0]["delta"].get("content")) is not None:
            buffer += text_chunk
            if should_send_to_tts(buffer):  # Define this function to decide when to send
                yield buffer
                buffer = ""  # Reset buffer after sending
```
```python
text_stream = llm_write("Hello, what is LLM?")

for text in text_stream:
    audio = stream_ffplay(
        tts(text, speaker, language, server_url, stream_chunk_size),
        output_file,
        save=bool(output_file),
    )
```
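The snippet above leaves `should_send_to_tts` undefined. One possible sketch (the name and thresholds are assumptions, not part of the repo): flush the buffer once it ends in sentence-final punctuation and has enough characters for XTTS to have usable context.

```python
import re

# Matches sentence-final punctuation, optionally followed by a closing
# quote/bracket and trailing whitespace, at the end of the buffer.
SENTENCE_END = re.compile(r"[.!?]['\")\]]?\s*$")

def should_send_to_tts(buffer: str, min_chars: int = 20) -> bool:
    """Hypothetical helper: flush only complete sentences of a minimum length."""
    return len(buffer) >= min_chars and bool(SENTENCE_END.search(buffer))
```

With this in place, the generator yields roughly one sentence at a time, which keeps each TTS request short without handing it mid-sentence fragments.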
I believe input needs to be at least a sentence, as speech relies heavily on the context provided by subsequent words.
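One way to guarantee that: buffer the streamed chunks and yield only complete sentences, holding any trailing fragment back until later chunks complete it. A minimal sketch (the function name is hypothetical):

```python
import re

def sentences_from_chunks(chunks):
    """Yield complete sentences from a stream of text chunks."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # Split after ., ! or ? followed by whitespace.
        parts = re.split(r"(?<=[.!?])\s+", buffer)
        # The last part may be an incomplete sentence; keep it buffered.
        buffer = parts.pop()
        for sentence in parts:
            yield sentence
    if buffer.strip():
        yield buffer  # Flush whatever remains at end of stream
```

Each yielded item is then a self-contained sentence that can be passed to the TTS API on its own.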
Does this work?
Hello Team,
Is it possible to run TTS streaming with streaming input text using the same file name?
Example:
With a minimum number of words sent to the TTS API.
Thanks, Santhosh