This is an LLM chatbot built with LangChain. The web interface is built with Streamlit. It implements hybrid RAG (keyword and semantic search) and chat memory.
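The hybrid retrieval part is not shown in this note; purely as an assumed sketch (the retriever classes, weights, placeholder documents and embedding model below are illustrations, not necessarily what the app actually does), combining keyword and semantic search in LangChain can look like this:

# Rough sketch of a hybrid (keyword + semantic) retriever; all values are placeholders.
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

documents = [Document(page_content="example document")]  # placeholder documents
embedding_model = HuggingFaceEmbeddings()                # placeholder embedding model

keyword_retriever = BM25Retriever.from_documents(documents)  # keyword (BM25) search
semantic_retriever = Chroma.from_documents(documents, embedding_model).as_retriever()  # semantic search

hybrid_retriever = EnsembleRetriever(
    retrievers=[keyword_retriever, semantic_retriever],
    weights=[0.5, 0.5],  # illustrative split between keyword and semantic results
)
docs = hybrid_retriever.invoke("example question")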
MAYBE: adapt the embed part, otherwise it only works if the Chroma DB server runs on the same host as the app:

utils.py:

if embed:
    Chroma.from_documents(documents2, embedding_model, collection_name=COLLECTION_NAME, persist_directory="./chromadb")  # <=== should use the Chroma client instead
It should be this instead:

chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)
vector_db = Chroma.from_documents(documents2, embedding_model, collection_name=COLLECTION_NAME, client=chroma_client)
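For reference, a self-contained sketch of that embed path against a remote Chroma server; the host, port, collection name, embedding model, and documents below are placeholders for whatever the repo actually defines:

import chromadb
from langchain_community.embeddings import HuggingFaceEmbeddings  # placeholder; use the repo's actual embedding model
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

CHROMA_SERVER_HOST = "localhost"  # placeholder host
CHROMA_SERVER_PORT = 8000         # default Chroma server port
COLLECTION_NAME = "docs"          # placeholder collection name

embedding_model = HuggingFaceEmbeddings()           # placeholder embedding model
documents2 = [Document(page_content="example")]     # placeholder documents

# Connect to the remote Chroma server instead of a local persist_directory,
# then embed and store the documents in the named collection.
chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)
vector_db = Chroma.from_documents(
    documents2,
    embedding_model,
    collection_name=COLLECTION_NAME,
    client=chroma_client,
)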
assistant_backend.py:

Instantiate the vector store the same way, against the Chroma DB server client:

chroma_client = chromadb.HttpClient(host=CHROMA_SERVER_HOST, port=CHROMA_SERVER_PORT)
vector_db = Chroma(embedding_function=embedding_model, collection_name=COLLECTION_NAME, client=chroma_client)
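As a rough usage sketch (the query string and k value are just examples), the connected vector store can then back retrieval in the chatbot:

# Query the existing remote collection; the corpus does not need to be re-embedded here.
retriever = vector_db.as_retriever(search_kwargs={"k": 4})       # retriever over the remote collection
results = vector_db.similarity_search("example question", k=4)   # direct semantic query
for doc in results:
    print(doc.page_content)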