This project is a chatbot application with a Python backend and a React frontend. The chatbot leverages session memory and a vectorstore to retrieve information. The frontend is a simple custom React interface, and the connection between the backend and frontend is established using WebSockets.
Clone the repository:
git clone https://github.com/nydasco/rag_based_chatbot.git
cd rag_based_chatbot
Set up folders and LLM:
mkdir processed
mkdir to_process
mkdir backend/llm
Download Meta-Llama-3-8B-Instruct-Q8_0.gguf from HuggingFace into the backend/llm folder.
Set up a virtual environment:
cd backend
python -m venv venv
source venv/bin/activate
Install the required dependencies:
pip install -r requirements.txt
Upload some data:
Place PDF files to be processed into the to_process folder.
python process_files.py
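The internals of process_files.py are not shown in this README. As a rough illustration only: an ingestion step like this typically extracts text from each PDF, splits it into overlapping chunks, and embeds those chunks into the vectorstore. Below is a minimal sketch of the chunking stage; the function name, chunk size, and overlap are illustrative assumptions, not values taken from the repository.

```python
# Illustrative chunker: split extracted text into overlapping windows so that
# context at chunk boundaries is not lost. Sizes are hypothetical defaults.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    chunks = []
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# 1200 characters with a 500-char window and 50-char overlap -> 3 chunks
chunks = chunk_text("a" * 1200, chunk_size=500, overlap=50)
```

Overlapping chunks are a common default in RAG pipelines because a fact split across a boundary remains fully contained in at least one chunk.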
Run the backend server:
python backend.py
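The README states that the backend and frontend communicate over WebSockets. As a hedged sketch of how such a backend is commonly wired up (the handler name, port 8765, and the JSON message shape are assumptions, not taken from backend.py):

```python
import asyncio
import json

# Hypothetical message envelope: the actual wire format used by backend.py
# is not documented here, so this JSON shape is an assumption.
def make_reply(question: str, answer: str) -> str:
    return json.dumps({"question": question, "answer": answer})

async def handler(websocket):
    # Placeholder answer: in the real backend, the reply would come from the
    # LLM plus vectorstore retrieval, not this echo-style stub.
    async for message in websocket:
        await websocket.send(make_reply(message, f"(LLM answer for: {message})"))

async def main():
    import websockets  # third-party; pip install websockets
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```

The React frontend would open a WebSocket to this address and render each JSON reply as a chat message.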
Navigate to the frontend directory:
cd nydasbot
Install the required dependencies:
npm install
Start the frontend development server:
npm start
Once both the backend and frontend servers are running, open your web browser and navigate to http://localhost:3000. You should see the chatbot interface.
Enter a message and press Enter, or click the "Send" button.

The backend configuration is managed through a parameters.toml file. This has been configured to work out of the box, but you may want to change some of the settings to better suit your environment.