This is an LLM chatbot built with LangChain, with a web interface built with Streamlit. It implements hybrid RAG (keyword and semantic search) and chat memory.
With the `@st.cache_resource` decorator below, the chain is instantiated once and reused on every rerun of the script (admin --> assistant --> admin --> assistant --> ...):
@st.cache_resource
def instanciate_ai_assistant_chain(model, temperature):
    ...  # builds and returns the RAG chain
Without the cache, the function runs again on every exchange (admin --> assistant --> admin --> assistant --> ...), taking a few seconds each time. This is easiest to observe on Streamlit Community Cloud, because it is slower than the VM.
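The effect of `@st.cache_resource` can be illustrated with a pure-Python sketch: this is not Streamlit's implementation, only an analogy using `functools.lru_cache`, with a placeholder constructor standing in for the real chain-building function.

```python
import functools

call_count = 0  # tracks how many times the constructor actually runs

# Analogy to @st.cache_resource: memoize an expensive constructor so
# repeated reruns reuse the same object instead of rebuilding it.
@functools.lru_cache(maxsize=None)
def instantiate_ai_assistant_chain(model, temperature):
    global call_count
    call_count += 1
    # In the real app this would build the LangChain chain
    # (retriever + LLM + memory); here we return a placeholder.
    return {"model": model, "temperature": temperature}

chain1 = instantiate_ai_assistant_chain("some-model", 0.2)
chain2 = instantiate_ai_assistant_chain("some-model", 0.2)
assert chain1 is chain2  # the same cached object is reused
assert call_count == 1   # the constructor ran only once
```

Without the decorator, every call would rebuild the object, which is why the uncached version pauses for a few seconds on each exchange.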