gsh199449 / read-paper


A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization #33

Open gsh199449 opened 6 years ago

gsh199449 commented 6 years ago

title

A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization

notes

Abstractive summarization of short texts, achieving state of the art on DUC, Gigaword, and LCSTS. The method builds on ConvS2S: during attention, the decoder attends not only to the source words but also to topic words, where the topics are pre-computed with LDA. Training uses RL to optimize ROUGE directly, and the paper explains in detail why RL is preferable to MLE. First, under MLE the model is exposed only to ground-truth prefixes during training and never conditions on its own previous outputs, whereas at test time it must generate from its own outputs (exposure bias). Second, MLE only rewards an output that matches the reference exactly, token by token; even a semantically equivalent summary gets no credit. Using ROUGE as the reward, by contrast, also works when there are multiple references.
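The joint attention over source words and topic words can be sketched as below. This is a hypothetical simplification, not the paper's exact biased-probability mechanism: it simply pools dot-product attention over the concatenation of source encodings and topic-word embeddings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_aware_attention(dec_state, src_enc, topic_emb):
    """Attend jointly over source-word encodings and LDA topic-word
    embeddings (hypothetical simplification of the paper's mechanism)."""
    keys = np.concatenate([src_enc, topic_emb], axis=0)  # (S+T, d)
    scores = keys @ dec_state                            # dot-product scores
    weights = softmax(scores)                            # sums to 1 over S+T
    return weights @ keys                                # context vector (d,)

rng = np.random.default_rng(0)
d = 8
ctx = topic_aware_attention(rng.normal(size=d),
                            rng.normal(size=(5, d)),   # 5 source positions
                            rng.normal(size=(3, d)))   # 3 topic words
print(ctx.shape)  # (8,)
```

Because source and topic keys share one softmax, topic words compete directly with source words for attention mass rather than being mixed in afterwards.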
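The RL objective can be sketched as a self-critical REINFORCE loss with a simplified unigram ROUGE reward. Both the reward (ROUGE-1 F1 without stemming) and the greedy baseline here are assumptions for illustration; the paper's exact reward and baseline may differ.

```python
def rouge_1_f(candidate, reference):
    """Simplified ROUGE-1 F1: clipped unigram overlap, no stemming."""
    c, r = candidate.split(), reference.split()
    overlap = sum(min(c.count(w), r.count(w)) for w in set(c))
    if not c or not r or overlap == 0:
        return 0.0
    p, rec = overlap / len(c), overlap / len(r)
    return 2 * p * rec / (p + rec)

def reinforce_loss(log_prob_sampled, sampled, greedy, reference):
    """Self-critical REINFORCE loss: reward of the sampled summary
    relative to the greedy baseline, scaled by its log-probability."""
    advantage = rouge_1_f(sampled, reference) - rouge_1_f(greedy, reference)
    return -advantage * log_prob_sampled

# A sampled summary that beats the greedy baseline gets a positive advantage,
# so minimizing the loss pushes its log-probability up.
loss = reinforce_loss(-2.3,
                      "the cat sat on the mat",   # sampled summary
                      "a cat on mat",             # greedy baseline
                      "the cat sat on the mat")   # reference
print(round(loss, 4))  # 0.92
```

This is exactly why a semantically close but non-identical output still earns a gradient signal here, unlike under MLE: the reward is overlap-based rather than exact-match.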

bibtex

@inproceedings{Wang2018ART,
  title     = {A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization},
  author    = {Li Wang and Junlin Yao and Yunzhe Tao and Li Zhong and Wei Liu and Qiang Du},
  booktitle = {IJCAI},
  year      = {2018},
  pages     = {4453--4460}
}

link

https://www.ijcai.org/proceedings/2018/0619.pdf

publication

IJCAI 2018, accepted as a long paper

open source

No

affiliated

Tencent SNG, AI Lab

Wanghn95 commented 5 years ago

666