# GRUEN for Evaluating Linguistic Quality of Generated Text

This repo implements the GRUEN metric from [GRUEN for Evaluating Linguistic Quality of Generated Text](https://arxiv.org/pdf/2010.02498.pdf) (Findings of EMNLP 2020).

## Table of Contents

- [Introduction](#introduction)
- [Code](#code)
- [Dataset](#dataset)
- [Related Papers](#related-papers)
- [Citation](#citation)

## Introduction

GRUEN evaluates the linguistic quality of text generated by machine learning models. Specifically, it captures four linguistic dimensions (Table 1 of the paper): grammaticality, non-redundancy, focus, and structure and coherence.

GRUEN has been shown to correlate well with human judgments on 13 datasets spanning the five natural language generation tasks covered in the Dataset section below: abstractive text summarization, machine translation, dialogue systems, text simplification, and text compression.
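As a rough illustration of how a score along these dimensions could be assembled, the sketch below combines per-dimension subscores into a single value in [0, 1]. This is only an illustration under stated assumptions, not the repo's actual aggregation: the subscore signs and the clipping are assumed here, and the authoritative computation lives in main.py.

```
# Illustrative sketch only; NOT the repo's exact aggregation logic.
# Assumptions: grammaticality is a reward in [0, 1]; redundancy, focus,
# and structure enter as non-positive penalties; the sum is clipped to [0, 1].
def combine_gruen_subscores(grammaticality, redundancy, focus, structure):
    raw = grammaticality + redundancy + focus + structure
    return min(1.0, max(0.0, raw))  # clip to [0, 1]

# A fluent, focused, coherent output keeps most of its grammaticality score,
# while penalties drag a degenerate output toward 0.
print(f"{combine_gruen_subscores(0.85, 0.0, 0.0, 0.0):.2f}")       # 0.85
print(f"{combine_gruen_subscores(0.85, -0.5, -0.25, -0.05):.2f}")  # 0.05
```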

## Code

The code is based on Python 3.

1) Install the dependencies:

```
pip install -r requirements.txt
```

or create a conda environment from the environment file:

```
conda env create --file environment.yml
```

2) Run the shell script to download the CoLA models:

```
chmod u+x install.sh && ./install.sh
```

3) Run main.py for an example usage:

```
python -m main
```
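To score candidates from your own code rather than the command line, something like the following should work. Note that `get_gruen` and its signature are assumptions here; check main.py for the actual entry point and any preprocessing it expects.

```
# Usage sketch; `get_gruen` and its signature are assumed from main.py.
# Verify the actual function name there before relying on this.
from main import get_gruen

candidates = [
    "The cat sat on the mat. It then fell asleep in the sun.",
    "cat mat cat mat the the sat",
]
scores = get_gruen(candidates)  # assumed: one score in [0, 1] per candidate
for text, score in zip(candidates, scores):
    print(f"{score:.3f}\t{text}")
```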

## Dataset

Collecting human judgments (i.e., manual linguistic quality annotation scores) of system outputs is critically important for developing and validating evaluation metrics.

To ease future research on novel evaluation metrics, we summarize several benchmark datasets below. Due to licensing restrictions, we cannot provide direct download links for the data and the human judgments; we do, however, point out how to access them.

Abstractive Text Summarization:

Machine Translation:

Dialogue System:

Text Simplification:

Text Compression:

## Related Papers

## Citation

If you find this repo useful, please cite:

```
@inproceedings{zhu2020gruen,
  title={GRUEN for Evaluating Linguistic Quality of Generated Text},
  author={Zhu, Wanzheng and Bhat, Suma},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings},
  pages={94--108},
  year={2020}
}
```