a1da4 / paper-survey

Summary of machine learning papers

Reading: Alibaba Submission for WMT18 Quality Estimation Task #33

Open a1da4 opened 5 years ago

a1da4 commented 5 years ago

0. Paper

@inproceedings{wang-etal-2018-alibaba,
    title = "{A}libaba Submission for {WMT}18 Quality Estimation Task",
    author = "Wang, Jiayi and Fan, Kai and Li, Bo and Zhou, Fengming and Chen, Boxing and Shi, Yangbin and Si, Luo",
    booktitle = "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
    month = oct,
    year = "2018",
    address = "Belgium, Brussels",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W18-6465",
    doi = "10.18653/v1/W18-6465",
    pages = "809--815",
}

1. What is it?

They proposed a strong QE model, QE Brain, which achieved first-place results in both the word-level and sentence-level QE tasks at WMT18.

2. What is amazing compared to previous studies?

They used a bidirectional Transformer language model together with a Bi-LSTM. Moreover, they proposed an objective function that makes use of the extracted features.

3. What is the key to the technologies and techniques?

Architectures

Strategies

[Screenshot: sentence-level objective function]

h is the reference HTER score, w is a weight vector, and h(→) and h(←) are the final hidden states of the bidirectional LSTM.
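As a rough sketch of this idea (not the authors' exact formulation): the forward and backward Bi-LSTM hidden states are concatenated, projected by the weight vector w, squashed to [0, 1] with a sigmoid, and regressed against the reference HTER score h. All dimensions and values below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: d is the Bi-LSTM hidden size (assumed, not from the paper)
d = 4
rng = np.random.default_rng(0)
h_fwd = rng.standard_normal(d)   # final forward hidden state h(→)
h_bwd = rng.standard_normal(d)   # final backward hidden state h(←)
w = rng.standard_normal(2 * d)   # weight vector w

# Predicted sentence-level score from the concatenated hidden states
h_pred = sigmoid(w @ np.concatenate([h_fwd, h_bwd]))

# Squared-error regression loss against the reference HTER score h
h_ref = 0.3
loss = (h_pred - h_ref) ** 2
```

Because HTER is bounded in [0, 1], the sigmoid keeps the prediction in the same range as the reference score.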

In addition, they proposed a new objective function that uses the 17 features f from the baseline system, QuEst++.
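One simple way to picture using the 17 baseline features (a hedged sketch, not the paper's exact objective): append the QuEst++ feature vector f to the learned representation and regress to the HTER score with a single linear layer. The dimensions and the concatenation scheme here are my assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
d = 4
h_states = rng.standard_normal(2 * d)  # concatenated Bi-LSTM states (assumed size)
f = rng.standard_normal(17)            # the 17 QuEst++ baseline features

# Hypothetical combination: one linear layer over [h_states; f],
# followed by a sigmoid to stay in the HTER range [0, 1]
combined = np.concatenate([h_states, f])
w = rng.standard_normal(combined.shape[0])
h_pred = sigmoid(w @ combined)
```

The point is that the hand-crafted baseline features and the learned neural features feed into one shared regression objective rather than being used separately.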

[Screenshot: objective function with QuEst++ features]

4. How did they validate it?

They tried the word-level and sentence-level QE tasks and achieved SoTA results in both.

5. Is there a discussion?

6. Which paper should I read next?

OpenKiwi

Automatic Post-Editing

a1da4 commented 5 years ago

#28 OpenKiwi

a1da4 commented 5 years ago

#34 Automatic Post-Editing