OpenBMB / RAGEval


RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework

[![Python 3.10](https://img.shields.io/badge/python-%E2%89%A53.10-blue)](https://www.python.org/downloads/release/python-3109/) [![Arxiv](https://img.shields.io/badge/arXiv-2408.01262-red)](https://arxiv.org/pdf/2408.01262)
(Figure: overview of the DRAGONBall dataset)

Introduction

RAGEval is a novel framework for automatically generating evaluation datasets that assess how well different Large Language Models (LLMs) use retrieved knowledge across various Retrieval-Augmented Generation (RAG) scenarios. Unlike existing RAG benchmarks that focus on general knowledge, RAGEval enables the creation of domain-specific factual queries, allowing for a more nuanced evaluation of RAG systems across different vertical domains.

News

Key Features

  1. πŸ—οΈ Flexible Schema Generation: Summarizes a schema from seed documents to capture domain-specific knowledge structures.

  2. πŸ”„ Diverse Document Generation: Uses the schema to generate varied configurations and subsequently diverse documents across multiple domains.

  3. ❓ Comprehensive QA Pair Creation: Constructs question-answering pairs based on generated documents and configurations.

  4. 📊 Novel Evaluation Metrics: Introduces three new metrics (Completeness, Hallucination, and Irrelevance) for a more thorough assessment of RAG model responses.

  5. 🌐 Multi-Domain Support: Covers various domains including finance, legal, and medical sectors in both Chinese and English languages.
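The generation stages above form a pipeline: a schema summarized from seed documents is instantiated into concrete configurations, each configuration is rendered into a document, and QA pairs are derived from the same facts. The toy sketch below illustrates that data flow only; every function name is hypothetical and each body is a placeholder, since the actual framework prompts an LLM at each stage.

```python
# Toy walkthrough of the generation pipeline. All names are illustrative
# (not the repo's API); real stages are LLM-driven, not templated.

def summarize_schema(seed_docs):
    """Stage 1: derive slot names shared by the seed documents."""
    return ["organization", "event", "date", "amount"]

def generate_config(schema, domain):
    """Stage 2a: fill schema slots with concrete, domain-specific facts."""
    return {slot: f"<{domain} {slot}>" for slot in schema}

def generate_document(config):
    """Stage 2b: render a document grounded in the config's facts."""
    return " ".join(f"{k}: {v}." for k, v in config.items())

def generate_qa(config, document):
    """Stage 3: build a question-answer pair from the same facts."""
    slot = next(iter(config))
    return {
        "question": f"What is the {slot}?",
        "reference": document,
        "answer": config[slot],
    }

schema = summarize_schema(["seed doc 1", "seed doc 2"])
config = generate_config(schema, "finance")
doc = generate_document(config)
qa = generate_qa(config, doc)
```

Because the document and the QA pair are produced from one shared configuration, every answer is grounded in facts that verifiably appear in the generated document.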

Components

  1. Schema Summary: Extracts domain-specific knowledge structures from seed documents.
  2. Document Generation: Creates diverse, factually rich documents based on the schema.
  3. QRA (Question-Reference-Answer) Generation: Produces comprehensive evaluation triples.
  4. DRAGONBall Dataset: A diverse RAG benchmark covering multiple domains and languages.
  5. Evaluation Metrics: Novel metrics for assessing RAG system performance.
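The three metrics are defined over the key points of a ground-truth answer. A minimal aggregation sketch, assuming each key point has already been judged against the model's response as "entailed", "contradicted", or "missing" (in practice an LLM judge performs that labeling; only the arithmetic below is shown):

```python
# Sketch of the three response-level metrics. The judgment labels are an
# assumed input format, not the framework's actual data structure.

def rag_metrics(judgments):
    """Aggregate per-key-point judgments into the three metric scores."""
    total = len(judgments)
    if total == 0:
        raise ValueError("need at least one judged key point")
    return {
        # fraction of ground-truth key points the response covers
        "completeness": judgments.count("entailed") / total,
        # fraction the response contradicts
        "hallucination": judgments.count("contradicted") / total,
        # fraction the response fails to address
        "irrelevance": judgments.count("missing") / total,
    }

scores = rag_metrics(["entailed", "entailed", "contradicted", "missing"])
# e.g. completeness 0.5, hallucination 0.25, irrelevance 0.25
```

Since every key point receives exactly one label, the three scores always sum to 1, which makes the trade-off between coverage and fabrication explicit.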

Usage

Experiments

RAGEval has been used to benchmark a range of LLMs and RAG configurations.

Results

Conclusion

RAGEval provides a comprehensive framework for evaluating RAG systems in domain-specific scenarios, offering more nuanced insights than existing benchmarks. It also highlights substantial room for improvement in open-source models on RAG tasks.

Citation

Please cite the following paper if you find RAGEval helpful!

@misc{zhu2024ragevalscenariospecificrag,
      title={RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework}, 
      author={Kunlun Zhu and Yifan Luo and Dingling Xu and Ruobing Wang and Shi Yu and Shuo Wang and Yukun Yan and Zhenghao Liu and Xu Han and Zhiyuan Liu and Maosong Sun},
      year={2024},
      eprint={2408.01262},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.01262}, 
}

Star History Chart