nydasco / rag_based_chatbot

MIT License

Chatbot with Session Memory and Vectorstore Retrieval

This project is a chatbot application with a Python backend and a React frontend. The chatbot leverages session memory and a vectorstore to retrieve information. The frontend is a simple custom React interface, and the backend and frontend communicate over WebSockets.
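The session-memory idea mentioned above can be sketched in a few lines of Python. The class below is an illustrative assumption, not the project's actual implementation: it just keeps an ordered per-session message history that a backend could prepend to the LLM prompt.

```python
from collections import defaultdict

class SessionMemory:
    """Per-session chat history (illustrative sketch, not the project's real class)."""

    def __init__(self):
        # Maps a session id to that session's ordered message history.
        self._sessions = defaultdict(list)

    def add(self, session_id: str, role: str, text: str) -> None:
        self._sessions[session_id].append({"role": role, "text": text})

    def history(self, session_id: str) -> list[dict]:
        # This history is what lets the model answer follow-up
        # questions in the context of the earlier conversation.
        return list(self._sessions[session_id])

memory = SessionMemory()
memory.add("abc123", "user", "What is RAG?")
memory.add("abc123", "assistant", "Retrieval-augmented generation.")
print(len(memory.history("abc123")))  # 2
```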


Installation

Backend

  1. Clone the repository:

    git clone https://github.com/nydasco/rag_based_chatbot.git
    cd rag_based_chatbot
  2. Set up folders and LLM:

    mkdir processed
    mkdir to_process
    mkdir backend/llm

    Download Meta-Llama-3-8B-Instruct-Q8_0.gguf from Hugging Face into the backend/llm folder.

  3. Set up a virtual environment:

    cd backend
    python -m venv venv
    source venv/bin/activate
  4. Install the required dependencies:

    pip install -r requirements.txt
  5. Add some data: place PDF files into the to_process folder, then run the processing script:

    python process_files.py
  6. Run the backend server:

    python backend.py
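The ingestion step above reads each PDF from to_process and indexes it into the vectorstore; the processed folder then holds files that should not be re-ingested. A minimal sketch of that flow is below; index_pdf is a hypothetical stand-in for whatever process_files.py actually does with each document:

```python
from pathlib import Path
import shutil

def index_pdf(pdf_path: Path) -> None:
    # Hypothetical stand-in: the real script would chunk the PDF and
    # embed the chunks into the vectorstore.
    print(f"indexed {pdf_path.name}")

def process_folder(to_process: Path, processed: Path) -> int:
    """Index every PDF in to_process, then move it to processed."""
    processed.mkdir(exist_ok=True)
    count = 0
    for pdf in sorted(to_process.glob("*.pdf")):
        index_pdf(pdf)
        # Moving the file out of to_process prevents double-ingestion
        # on the next run.
        shutil.move(str(pdf), processed / pdf.name)
        count += 1
    return count
```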

Frontend

  1. Navigate to the frontend directory:

    cd nydasbot
  2. Install the required dependencies:

    npm install
  3. Start the frontend development server:

    npm start

Usage

Once both the backend and frontend servers are running, open your web browser and navigate to http://localhost:3000. You should see the chatbot interface.

Configuration

The backend configuration is managed through a parameters.toml file. It ships with working defaults, but you may want to change some of the settings to better suit your environment.