KoljaB / RealtimeTTS

Converts text to speech in realtime

Is there any way to convert the input text received by the realtimetts server into a text stream? #107

Open worker121 opened 1 month ago

worker121 commented 1 month ago

Hi, I want to know whether there is any way to change the input text into a text stream, to match the language model's streaming output.

threading.Thread(
    target=play_text_to_speech, args=(stream, text), daemon=True
).start()

by changing the text to a text iterator, how to make it work?

I use a coroutine to receive the data stream and put it into a queue:

receive_thread = threading.Thread(
    target=asyncio.run, args=(receive_stream(request),), daemon=True
)
receive_thread.start()

async def receive_stream(request: Request):
    try:
        async for chunk in request.stream():
            print("chunk", chunk)
            data_queue.put(chunk.decode())
    except Exception as e:
        print(f"Error processing stream: {e}")
    finally:
        data_queue.put(None)  # mark end of stream

and define a string iterator to replace the original text:

def string_iterator() -> Iterator[str]:
    """Read data from the queue and yield it as a string iterator."""
    item = data_queue.get()
    print(item)
    if item is None:
        return
    yield item

threading.Thread(
    target=play_text_to_speech, args=(stream, string_iterator()), daemon=True
).start()

The code above doesn't work as expected. Is there a better way to achieve this?
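One likely problem with the snippet above is that string_iterator reads the queue only once, so at most one chunk is ever yielded. A minimal sketch of the queue-bridge pattern that loops until a sentinel, with a plain list of strings standing in for the FastAPI request (the producer and function names here are illustrative, not part of RealtimeTTS):

```python
import asyncio
import queue
import threading
from typing import Iterator, List, Optional

data_queue: "queue.Queue[Optional[str]]" = queue.Queue()

async def receive_stream(chunks: List[str]) -> None:
    # Stand-in for `async for chunk in request.stream()`: push each
    # decoded chunk into the thread-safe queue, then a None sentinel.
    for chunk in chunks:
        await asyncio.sleep(0)  # simulate awaiting the network
        data_queue.put(chunk)
    data_queue.put(None)  # mark end of stream

def string_iterator() -> Iterator[str]:
    # Unlike the single-shot version above, loop until the sentinel
    # arrives, so every queued chunk is yielded.
    while True:
        item = data_queue.get()
        if item is None:
            return
        yield item

threading.Thread(
    target=asyncio.run, args=(receive_stream(["Hello ", "world"]),), daemon=True
).start()

result = list(string_iterator())
print(result)  # ['Hello ', 'world']
```

The sentinel is what lets the synchronous consumer terminate cleanly; without it, data_queue.get() would block forever once the producer finishes.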

KoljaB commented 1 month ago

Code example of how it could look:

def generator(request):
    try:
        for chunk in request.stream():
            yield chunk
    except Exception as e:
        print(f"Error processing stream: {e}")

stream.feed(generator(request))
stream.play_async()
KoljaB commented 1 month ago

Here is example code using a generator fed from an OpenAI LLM:

https://github.com/KoljaB/RealtimeTTS/blob/master/tests/openai_voice_interface.py#L36-L45
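The core idea in the linked example is a plain generator that pulls only the text portion out of each streamed LLM chunk and yields it to the TTS stream. A hedged sketch of that shape, using mock dict chunks instead of the real OpenAI client objects (a real stream would expose the text via chunk.choices[0].delta.content):

```python
from typing import Dict, Iterator, List, Optional

def text_generator(chunks: List[Dict[str, Optional[str]]]) -> Iterator[str]:
    # Yield only non-empty text deltas; skip keep-alive/None chunks.
    for chunk in chunks:
        text = chunk.get("delta")
        if text:
            yield text

# Hypothetical mock of a streamed completion response.
mock_chunks = [{"delta": "Hello"}, {"delta": None}, {"delta": " there"}]
result = "".join(text_generator(mock_chunks))
print(result)  # Hello there
```

Because it is an ordinary synchronous generator, it can be passed straight to stream.feed() as in the example above.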

worker121 commented 1 month ago

Thanks for the code, but the error still occurs when using the code example: 'Error processing stream: 'async_generator' object is not iterable'

It looks like we cannot feed generator(request) directly to the stream.

So, is it possible to use FastAPI to make this server capable of accepting streaming data, rather than having the server make request calls to an LLM service to obtain the streaming data?

KoljaB commented 1 month ago

Thanks for the code, but error still occurs by using the code example. 'Error processing stream: 'async_generator' object is not iterable'

Can you please post the code of your async_generator? 'Error processing stream: 'async_generator' object is not iterable' means that async_generator is not iterable, so it is not a regular (synchronous) generator. The play methods need a plain generator, so something is wrong here.
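This error typically appears because FastAPI's request.stream() returns an async generator, which synchronous code cannot iterate with a plain for loop. One possible bridge (a sketch, with a stand-in for request.stream(); the helper name sync_generator is not part of RealtimeTTS or FastAPI) is to drive the async generator step by step on a private event loop:

```python
import asyncio
from typing import AsyncIterator, Iterator

async def fake_request_stream() -> AsyncIterator[bytes]:
    # Stand-in for FastAPI's `request.stream()`, which is async.
    for chunk in (b"Hello ", b"world"):
        yield chunk

def sync_generator(agen: AsyncIterator[bytes]) -> Iterator[str]:
    # Advance the async generator one item at a time on a private
    # event loop, so a synchronous consumer such as stream.feed()
    # can iterate the result like any ordinary generator.
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                chunk = loop.run_until_complete(agen.__anext__())
            except StopAsyncIteration:
                break
            yield chunk.decode()
    finally:
        loop.close()

result = "".join(sync_generator(fake_request_stream()))
print(result)  # Hello world
```

This pattern only works from a thread that has no event loop already running, which fits the threading setup shown earlier in this issue.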

it looks like we cannot feed the generator(request) directly to the stream.

Of course we can; I use this every day. Please look at the example I posted yesterday, where I do something like this.

So, is that possible by using the fastapi to make this server capable of accepting streaming data, rather than having the server make request calls to an LLM service to obtain streaming data.

It should certainly be possible to change the code a bit to achieve that.

worker121 commented 1 month ago

Thank you for the response. I have modified the code to meet my requirements.