infiniflow / ragflow

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
https://ragflow.io
Apache License 2.0

English | 简体中文 | 日本語 | 한국어 | Bahasa Indonesia


Document | Roadmap | Twitter | Discord | Demo

📕 Table of Contents

- 💡 [What is RAGFlow?](#-what-is-ragflow)
- 🎮 [Demo](#-demo)
- 📌 [Latest Updates](#-latest-updates)
- 🌟 [Key Features](#-key-features)
- 🔎 [System Architecture](#-system-architecture)
- 🎬 [Get Started](#-get-started)
- 🔧 [Configurations](#-configurations)
- 🔧 [Build a docker image without embedding models](#-build-a-docker-image-without-embedding-models)
- 🔧 [Build a docker image including embedding models](#-build-a-docker-image-including-embedding-models)
- 🔨 [Launch service from source for development](#-launch-service-from-source-for-development)
- 📚 [Documentation](#-documentation)
- 📜 [Roadmap](#-roadmap)
- 🏄 [Community](#-community)
- 🙌 [Contributing](#-contributing)

💡 What is RAGFlow?

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from data in a variety of complex formats.

🎮 Demo

Try our demo at https://demo.ragflow.io.

🔥 Latest Updates

🎉 Stay Tuned

⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟

🌟 Key Features

🍭 "Quality in, quality out"

🍱 Template-based chunking

🌱 Grounded citations with reduced hallucinations

🍔 Compatibility with heterogeneous data sources

🛀 Automated and effortless RAG workflow

🔎 System Architecture

🎬 Get Started

📝 Prerequisites

🚀 Start up the server

  1. Ensure vm.max_map_count >= 262144:

    To check the value of vm.max_map_count:

    $ sysctl vm.max_map_count

    If the value is less than 262144, reset vm.max_map_count to at least 262144:

    # In this case, we set it to 262144:
    $ sudo sysctl -w vm.max_map_count=262144

    This change will be reset after a system reboot. To ensure your change remains permanent, add or update the vm.max_map_count value in /etc/sysctl.conf accordingly:

    vm.max_map_count=262144
  2. Clone the repo:

    $ git clone https://github.com/infiniflow/ragflow.git
  3. Start up the server using the pre-built Docker images:

    The command below downloads the dev version of the RAGFlow slim Docker image (dev-slim). Note that RAGFlow slim Docker images do not include embedding models or Python libraries and are therefore approximately 1 GB in size.

    $ cd ragflow/docker
    $ docker compose -f docker-compose.yml up -d
    • To download a RAGFlow slim Docker image of a specific version, update the RAGFLOW_IMAGE variable in docker/.env to your desired version. For example, RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0-slim. After making this change, rerun the command above to initiate the download.
    • To download the dev version of RAGFlow Docker image including embedding models and Python libraries, update the RAGFLOW_IMAGE variable in docker/.env to RAGFLOW_IMAGE=infiniflow/ragflow:dev. After making this change, rerun the command above to initiate the download.
    • To download a specific version of RAGFlow Docker image including embedding models and Python libraries, update the RAGFLOW_IMAGE variable in docker/.env to your desired version. For example, RAGFLOW_IMAGE=infiniflow/ragflow:v0.14.0. After making this change, rerun the command above to initiate the download.

    NOTE: A RAGFlow Docker image that includes embedding models and Python libraries is approximately 9 GB in size and may take significantly longer to load.

  4. Check the server status after having the server up and running:

    $ docker logs -f ragflow-server

    The following output confirms a successful launch of the system:

    
         ____   ___    ______ ______ __               
        / __ \ /   |  / ____// ____// /____  _      __
       / /_/ // /| | / / __ / /_   / // __ \| | /| / /
      / _, _// ___ |/ /_/ // __/  / // /_/ /| |/ |/ / 
     /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/ 
    
    * Running on all addresses (0.0.0.0)
    * Running on http://127.0.0.1:9380
    * Running on http://x.x.x.x:9380
    INFO:werkzeug:Press CTRL+C to quit

    If you skip this confirmation step and log in to RAGFlow directly, your browser may report a network error because RAGFlow may not be fully initialized at that moment.

  5. In your web browser, enter the IP address of your server and log in to RAGFlow.

    With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE; the default HTTP serving port 80 can be omitted.

  6. In service_conf.yaml.template, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.

    See llm_api_key_setup for more information.

    The show is on!
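As a convenience, the readiness check in step 4 can be scripted. The sketch below is an illustration, not part of RAGFlow; it simply looks for the `Running on http://…:9380` banner in text captured from `docker logs ragflow-server`:

```python
import re

def server_ready(log_text: str) -> bool:
    """Return True once the RAGFlow startup banner appears in the log text."""
    return bool(re.search(r"Running on http://[\d.]+:9380", log_text))

# Example: pass in output captured from `docker logs ragflow-server`
print(server_ready("* Running on http://127.0.0.1:9380"))  # True
print(server_ready("INFO:werkzeug:Press CTRL+C to quit"))  # False
```

Polling this in a loop before opening the browser avoids the partially initialized state described above.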

🔧 Configurations

When it comes to system configurations, you will need to manage the following files under ./docker: .env, service_conf.yaml.template, and docker-compose.yml.

The ./docker/README file provides a detailed description of the environment settings and service configurations; the environment variables it documents can be referenced as ${ENV_VARS} in the service_conf.yaml.template file.

To update the default HTTP serving port (80), go to docker-compose.yml and change 80:80 to <YOUR_SERVING_PORT>:80.

Updates to the above configurations require restarting all containers to take effect:

$ docker compose -f docker/docker-compose.yml up -d
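For instance, to expose RAGFlow on host port 8080 (an arbitrary example port) instead of 80, the `ports` mapping in docker-compose.yml would be edited as follows; the service name shown is illustrative, so match it to the one in your file:

```yaml
# docker-compose.yml (excerpt): map host port 8080 to the container's port 80
services:
  ragflow:
    ports:
      - "8080:80"
```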

Switch doc engine from Elasticsearch to Infinity

RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to Infinity, follow these steps:

  1. Stop all running containers:

    $ docker compose -f docker/docker-compose.yml down -v
  2. Set DOC_ENGINE in docker/.env to infinity.

  3. Start the containers:

    $ docker compose -f docker/docker-compose.yml up -d

[!WARNING] Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
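Concretely, step 2 above is a one-line change in docker/.env:

```
# docker/.env (excerpt)
DOC_ENGINE=infinity
```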

🔧 Build a Docker image without embedding models

This image is approximately 1 GB in size and relies on external LLM and embedding services.

git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile.slim -t infiniflow/ragflow:dev-slim .

🔧 Build a Docker image including embedding models

This image is approximately 9 GB in size. As it includes embedding models, it relies on external LLM services only.

git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
pip3 install huggingface-hub nltk
python3 download_deps.py
docker build -f Dockerfile -t infiniflow/ragflow:dev .

🔨 Launch service from source for development

  1. Install Poetry, or skip this step if it is already installed:

    curl -sSL https://install.python-poetry.org | python3 -
  2. Clone the source code and install Python dependencies:

    git clone https://github.com/infiniflow/ragflow.git
    cd ragflow/
    export POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true
    ~/.local/bin/poetry install --sync --no-root --with=full # install RAGFlow dependent python modules
  3. Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:

    docker compose -f docker/docker-compose-base.yml up -d

    Add the following line to /etc/hosts to resolve all hosts specified in docker/.env to 127.0.0.1:

    127.0.0.1       es01 infinity mysql minio redis

    In docker/service_conf.yaml.template, update mysql port to 5455 and es port to 1200, as specified in docker/.env.

  4. If you cannot access HuggingFace, set the HF_ENDPOINT environment variable to use a mirror site:

    export HF_ENDPOINT=https://hf-mirror.com
  5. Launch backend service:

    source .venv/bin/activate
    export PYTHONPATH=$(pwd)
    bash docker/launch_backend_service.sh
  6. Install frontend dependencies:

    cd web
    npm install --force
  7. Configure the frontend by updating proxy.target in .umirc.ts to http://127.0.0.1:9380.

  8. Launch frontend service:

    npm run dev 

    The following output confirms a successful launch of the system:
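For step 7, the change is a single field. A hypothetical excerpt of .umirc.ts is shown below; the `/v1` proxy path is an assumption, so keep whatever path your file already proxies and change only `target`:

```typescript
// .umirc.ts (excerpt): forward dev-server API requests to the local backend
export default {
  proxy: {
    '/v1': {
      target: 'http://127.0.0.1:9380',
      changeOrigin: true,
    },
  },
};
```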

📚 Documentation

📜 Roadmap

See the RAGFlow Roadmap 2024.

🏄 Community

🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.