I am using LlamaIndex with an Open WebUI pipeline, following the example at https://github.com/open-webui/pipelines/blob/main/examples/pipelines/rag/llamaindex_pipeline.py
However, if I try to use a streaming response like the example does, returning with "return response.response_gen", Open WebUI is unable to display any result and leaves the chat as shown below.
There is no error.
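For reference, my pipe method follows the linked example closely; here is a minimal sketch (the data path is a placeholder, and the Ollama/embedding setup from the example is omitted):

```python
from typing import Generator, Iterator, List, Union

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex


class Pipeline:
    def __init__(self):
        self.index = None

    async def on_startup(self):
        # Build the index when the pipeline server starts.
        # "./data" is a placeholder for the actual documents directory.
        documents = SimpleDirectoryReader("./data").load_data()
        self.index = VectorStoreIndex.from_documents(documents)

    async def on_shutdown(self):
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        query_engine = self.index.as_query_engine(streaming=True)
        response = query_engine.query(user_message)
        # Returning the token generator: this is the part that produces
        # no visible output in the Open WebUI chat for me.
        return response.response_gen
```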
Running the LlamaIndex streaming response locally with "response.print_response_stream()" does print the result word by word.
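The local test is just the plain LlamaIndex streaming call, roughly like this (sketch; the path and question are placeholders):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Build a small index locally; "./data" is a placeholder path.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("What is this document about?")  # placeholder question
response.print_response_stream()  # prints the answer word by word as tokens arrive
```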
Any help would be appreciated, thank you.
Edit: My bad, my debug logging was causing this.