dodeeric / langchain-ai-assistant-with-hybrid-rag

This is an LLM chatbot built with LangChain, with a web interface built with Streamlit. It implements hybrid RAG (keyword and semantic search) and chat memory.
https://bmae-ragai-webapp.azurewebsites.net
GNU General Public License v3.0

Run app in Streamlit Community Cloud and in GitHub Codespace with Chroma DB server running on myvm2 #58

Closed: dodeeric closed this issue 2 weeks ago

dodeeric commented 2 weeks ago

MAYBE the embed part needs to be adapted, or it works only if the Chroma DB server runs on the same server as the app:

utils.py:

```python
if embed:
    Chroma.from_documents(documents2, embedding_model, collection_name=COLLECTION_NAME, persist_directory="./chromadb")  # <=== should use the Chroma client
```

Should be this:

```python
chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)
# from_documents(), not the constructor, is needed to embed and write documents2 through the client
vector_db = Chroma.from_documents(documents2, embedding=embedding_model, collection_name=COLLECTION_NAME, client=chroma_client)
```
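For reference, a minimal self-contained sketch of that remote-ingest variant, assuming `documents2`, `embedding_model`, `COLLECTION_NAME`, `CHROMA_SERVER_HOST` and `CHROMA_SERVER_PORT` are already defined in utils.py as in the repo, and that the `langchain_community` Chroma wrapper is in use; the helper name `embed_to_remote_chroma` is hypothetical:

```python
import chromadb
from langchain_community.vectorstores import Chroma

def embed_to_remote_chroma(documents2, embedding_model):
    # Connect to the remote Chroma DB server instead of a local persist_directory.
    chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)
    # from_documents() embeds documents2 and writes the vectors into the
    # remote collection through the HTTP client.
    return Chroma.from_documents(
        documents=documents2,
        embedding=embedding_model,
        collection_name=COLLECTION_NAME,
        client=chroma_client,
    )
```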

assistant_backend.py:

as is already done there where the Chroma DB client is instantiated:

```python
chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)
vector_db = Chroma(embedding_function=embedding_model, collection_name=COLLECTION_NAME, client=chroma_client)
```
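And a hedged sketch of how the read side could then expose the remote collection to the RAG chain; the `semantic_retriever` name and the `k=4` value are illustrative, not taken from the repo:

```python
import chromadb
from langchain_community.vectorstores import Chroma

chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)

# No documents are passed here: the collection already exists on the remote
# server, so only the embedding function used at ingest time is needed.
vector_db = Chroma(
    embedding_function=embedding_model,
    collection_name=COLLECTION_NAME,
    client=chroma_client,
)

# Semantic retriever over the remote collection; k=4 is an illustrative value.
semantic_retriever = vector_db.as_retriever(search_kwargs={"k": 4})
```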