This paper proposes a graph-transforming encoder that converts a scientific knowledge graph into natural-language text with the help of a seed sentence (the paper title). The authors limit their work to the scientific domain, generating paper abstracts from the corresponding titles and knowledge graphs. For evaluation, they contribute a new dataset, AGENDA, containing 40K paper titles and abstracts drawn from 12 top AI conferences. Their model outperforms the baseline methods in both BLEU and METEOR scores. They also perform a human evaluation (best-worst scaling), but only against human-authored text and the Rewriter system. Presumably, GraphWriter won.
Contribution
Proposed a new graph transformer encoder that applies the successful sequence transformer to graph-structured inputs
Showed how IE output can be formed as a connected unlabeled graph for use in attention-based encoders
Provided a large dataset of knowledge graphs paired with scientific texts for further study
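The core idea of the graph transformer encoder is to keep the sequence transformer's self-attention machinery but restrict each node's attention to its graph neighborhood. A minimal single-head sketch of that masking idea (my own illustration, not the authors' code; the real model adds multiple heads, residual connections, and relation vertices, and the projection matrices here are random placeholders for learned weights):

```python
import numpy as np

def graph_self_attention(node_feats, adjacency, d_k=None):
    """Single-head self-attention restricted to graph neighborhoods:
    each node attends only to its neighbors (and itself), rather than
    to every position as in the sequence transformer."""
    n, d = node_feats.shape
    d_k = d_k or d
    rng = np.random.default_rng(0)
    # Placeholder "learned" projections (random here for illustration).
    Wq = rng.normal(size=(d, d_k))
    Wk = rng.normal(size=(d, d_k))
    Wv = rng.normal(size=(d, d_k))
    Q, K, V = node_feats @ Wq, node_feats @ Wk, node_feats @ Wv
    scores = Q @ K.T / np.sqrt(d_k)
    # Mask out non-neighbors so attention follows the graph structure.
    mask = adjacency + np.eye(n)  # allow self-attention
    scores = np.where(mask > 0, scores, -1e9)
    # Row-wise softmax over the (masked) scores.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V
```

With the mask removed this reduces to ordinary sequence self-attention, which is exactly the relationship the first contribution bullet claims.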
Comment
After replicating their experiments, I reproduced their BLEU score, but the METEOR score I obtained was substantially below the reported one.
Their closest contender was GAT; I would like to see a human evaluation against GAT as well.
Publication: ACL Anthology
Authors: Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, Hannaneh Hajishirzi