I built a small RAG with a local embedding model using the regular Python-based LlamaIndex. How do I hook this React-based chat application up to the Python chat engine? Or, more generally, what is the idiomatic way to put a GUI chat on top of the Python-based LlamaIndex?
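For context, my Python setup is roughly the following (a minimal sketch assuming the current `llama_index.core` API and a HuggingFace embedding model; the model name and data directory are just placeholders):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Use a local embedding model instead of the default remote embeddings
# (the model name here is only an example)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Index some local documents (the path is a placeholder)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# This is the chat engine I'd like to drive from a GUI
chat_engine = index.as_chat_engine()
response = chat_engine.chat("What do my documents say about X?")
print(response)
```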