run-llama / sec-insights

A real world full-stack application using LlamaIndex
https://www.secinsights.ai/
MIT License

use of anyio vs queue object #112

Open lppier opened 2 months ago

lppier commented 2 months ago

Hi, firstly, thank you so much for doing this. This isn't really an issue report, and I wasn't sure where else I could post this; if there is a better place, please let me know.

I saw that anyio is used in this manner to store the streaming message objects: https://github.com/run-llama/sec-insights/blob/main/backend/app/api/endpoints/conversation.py#L96

I'm wondering whether there are any advantages to using this over a plain Python asyncio queue object. I'm asking because in my company policy bot's RAG implementation I'm currently storing the streamed messages in a Python queue object.
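For context, here is a minimal sketch of the two patterns I'm comparing: an `asyncio.Queue` with a sentinel to mark the end of the stream, versus an anyio memory object stream where closing the send side ends iteration. The token list and function names are made up for illustration; this is not the sec-insights code itself.

```python
import asyncio
import anyio

TOKENS = ["Hello", " ", "world"]  # stand-in for streamed LLM tokens


# --- asyncio.Queue: producer puts tokens, consumer reads until a sentinel ---
async def queue_demo() -> None:
    queue = asyncio.Queue()

    async def produce() -> None:
        for tok in TOKENS:
            await queue.put(tok)
        await queue.put(None)  # sentinel marks end-of-stream

    async def consume() -> None:
        while (tok := await queue.get()) is not None:
            print("queue:", tok)

    await asyncio.gather(produce(), consume())


# --- anyio memory object stream: closing the send stream ends the async-for ---
async def anyio_demo() -> None:
    send, receive = anyio.create_memory_object_stream(max_buffer_size=10)

    async def produce() -> None:
        async with send:  # closing the stream replaces the sentinel
            for tok in TOKENS:
                await send.send(tok)

    async with anyio.create_task_group() as tg:
        tg.start_soon(produce)
        async for tok in receive:
            print("anyio:", tok)


if __name__ == "__main__":
    asyncio.run(queue_demo())
    anyio.run(anyio_demo)
```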

Does using anyio have any latency benefits?

Thanks!