Apache License 2.0

SacreROUGE


New (2022-04-22): The metric correlation confidence intervals/hypothesis tests from A Statistical Analysis of Summarization Evaluation Metrics Using Resampling Methods and the modified system-level correlation calculations from Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics can more easily be used with the nlpstats library.
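As a rough illustration of the resampling idea behind those confidence intervals (this is not the actual procedure from the paper or nlpstats; the scores and the `pearson` helper below are invented for illustration), a bootstrap over inputs resamples the per-input scores with replacement and recomputes the correlation each time:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical paired per-input scores: (metric score, human score).
scores = [(0.41, 3.0), (0.35, 2.5), (0.52, 4.0), (0.28, 2.0), (0.47, 3.5)]

random.seed(0)
samples = []
for _ in range(1000):
    # Resample the inputs with replacement and recompute the correlation.
    resampled = [random.choice(scores) for _ in scores]
    xs, ys = zip(*resampled)
    if len(set(xs)) > 1 and len(set(ys)) > 1:  # skip degenerate resamples
        samples.append(pearson(xs, ys))

# Take the empirical 2.5% and 97.5% quantiles as a 95% interval.
samples.sort()
lower = samples[int(0.025 * len(samples))]
upper = samples[int(0.975 * len(samples))]
print(f"95% bootstrap CI: [{lower:.3f}, {upper:.3f}]")
```

The papers linked above study which units to resample (inputs, systems, or both); nlpstats implements those variants properly, so prefer it over a hand-rolled loop like this one.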

New (2021-08-04): We now have Docker versions of several evaluation metrics included in the library, which makes it even easier to run them as long as you have Docker installed. Our implementations are wrappers around the metrics included in the Repro library. See here for more information about the Dockerized metrics.

SacreROUGE is a library dedicated to the development and use of summarization evaluation metrics. It can be viewed as an AllenNLP for evaluation metrics (with an emphasis on summarization). The inspiration for the library came from SacreBLEU, a library with a standardized implementation of BLEU and dataset readers for common machine translation datasets. See our paper for more details or this Jupyter Notebook that was presented at the NLP-OSS 2020 and Eval4NLP 2020 workshops for a demo of the library.

The development of SacreROUGE was motivated by three problems:

The two main uses of SacreROUGE are to evaluate summarization systems and to evaluate the evaluation metrics themselves by calculating their correlations to human judgments.
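For example, evaluating a metric at the system level typically means averaging each system's per-summary scores, then correlating the per-system averages against the averaged human judgments. A minimal sketch (the scores and the `kendall_tau` helper are hypothetical, not SacreROUGE's API):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau-a between two equal-length score lists."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        product = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    return (concordant - discordant) / (len(xs) * (len(xs) - 1) / 2)

# Hypothetical per-summary scores for 3 systems on 2 inputs.
metric_scores = {"sys1": [0.40, 0.50], "sys2": [0.30, 0.35], "sys3": [0.20, 0.25]}
human_scores  = {"sys1": [3.0, 4.0],  "sys2": [4.0, 4.5],  "sys3": [2.0, 2.5]}

# System level: average over inputs, then correlate across systems.
systems = sorted(metric_scores)
metric_avg = [sum(metric_scores[s]) / len(metric_scores[s]) for s in systems]
human_avg = [sum(human_scores[s]) / len(human_scores[s]) for s in systems]
print(kendall_tau(metric_avg, human_avg))
```

SacreROUGE's own correlation commands also report summary-level correlations, which correlate scores within each input before averaging; the two can disagree substantially.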

Installing

The easiest way to use SacreROUGE is to install the PyPI package via:

pip install sacrerouge

This will add a new sacrerouge bash command to your path, which serves as the primary interface for the library.

Tutorials

We provide several tutorials on how to use SacreROUGE, depending on your use case:

Setting up a Dataset

SacreROUGE contains code to load some summarization datasets and save them in a common format. Run the sacrerouge setup-dataset command to see the available datasets, or check here.
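The common format is JSON Lines: one JSON object per line per summary. As an illustration of the pattern only (the exact field names are defined by each dataset reader, so treat the ones below as assumptions):

```python
import json

# Illustrative records only: the actual schema comes from SacreROUGE's
# dataset readers; this sketch just shows the JSON-lines pattern.
records = [
    {"instance_id": "d001", "summarizer_id": "sys1", "summarizer_type": "peer",
     "summary": {"text": "A short system summary."},
     "references": [{"text": "A short reference summary."}]},
    {"instance_id": "d002", "summarizer_id": "sys1", "summarizer_type": "peer",
     "summary": {"text": "Another system summary."},
     "references": [{"text": "Another reference summary."}]},
]

# Write one JSON object per line.
with open("summaries.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read it back: each line parses independently.
with open("summaries.jsonl") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded), loaded[0]["instance_id"])
```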

Data Visualization

We have also written two data visualization tools. The first tool visualizes a Pyramid and optional Pyramid annotations on peer summaries. It accepts the pyramid.jsonl and pyramid-annotations.jsonl files which are saved by some of the dataset readers.

The second tool visualizes the n-gram matches that are used to calculate the ROUGE score. It accepts the summaries.jsonl files which are saved by some of the dataset readers.
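The matches being visualized are, at heart, n-gram overlaps between a candidate summary and a reference. A heavily simplified sketch of ROUGE-N recall (not SacreROUGE's implementation: real ROUGE adds stemming, tokenization rules, stopword options, and multi-reference handling):

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """Simplified ROUGE-N recall: clipped n-gram matches over reference n-grams."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split())
    ref = ngrams(reference.lower().split())
    # Clip each match count by its count in the candidate.
    matches = sum(min(count, cand[gram]) for gram, count in ref.items())
    return matches / max(sum(ref.values()), 1)

candidate = "the cat sat on the mat"
reference = "the cat lay on the mat"
print(rouge_n_recall(candidate, reference, n=1))  # 5 of 6 reference unigrams match
```

The visualization tool highlights exactly these matched n-grams in context, which makes it easy to see why a summary received its score.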

Papers

Relevant publications that are implemented in the SacreROUGE framework include:

Help

If you have any questions or suggestions, please open an issue or contact me (Dan Deutsch).

Citation

If you use SacreROUGE for your paper, please cite the following paper:

@inproceedings{deutsch-roth-2020-sacrerouge,
    title = {{SacreROUGE: An Open-Source Library for Using and Developing Summarization Evaluation Metrics}},
    author = "Deutsch, Daniel  and
      Roth, Dan",
    booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nlposs-1.17",
    pages = "120--125"
}