a1da4 / paper-survey

Summary of machine learning papers

Reading: Improving Back-Translation with Uncertainty-based Confidence Estimation #9

Open a1da4 opened 5 years ago

a1da4 commented 5 years ago

0. Paper

```
@inproceedings{wang-etal-2019-improving-back,
    title = "Improving Back-Translation with Uncertainty-based Confidence Estimation",
    author = "Wang, Shuo and Liu, Yang and Wang, Chao and Luan, Huanbo and Sun, Maosong",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-1073",
    doi = "10.18653/v1/D19-1073",
    pages = "791--802",
}
```

My literature review slide (in Japanese): [speakerdeck]

1. What is it?

In this paper, they propose a method to quantify the confidence of pseudo-data generated by back-translation. They use both sentence-level and word-level confidence scores.
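As a rough illustration of how such scores could be used, here is a minimal sketch of weighting the training loss of a pseudo-parallel sentence by word- and sentence-level confidence. The function name and the exact weighting formula are my own assumptions, not the paper's definitions:

```python
import numpy as np

def weighted_nll(token_nll, word_conf, sent_conf):
    """Weight per-token negative log-likelihoods of a pseudo-parallel
    sentence by word-level and sentence-level confidence scores.
    (Illustrative weighting scheme, not the paper's exact formula.)

    token_nll: per-token NLL of the target sentence, shape (T,)
    word_conf: word-level confidence in [0, 1], shape (T,)
    sent_conf: scalar sentence-level confidence in [0, 1]
    """
    token_nll = np.asarray(token_nll, dtype=float)
    word_conf = np.asarray(word_conf, dtype=float)
    # Each token's loss is scaled by its own confidence and by the
    # confidence of the whole back-translated sentence, so noisy
    # pseudo-data contributes less to training.
    return float(sent_conf * np.sum(word_conf * token_nll))

# A confident pseudo-sentence contributes more to training than a noisy one.
loss_clean = weighted_nll([2.0, 1.0], word_conf=[0.9, 0.8], sent_conf=0.9)
loss_noisy = weighted_nll([2.0, 1.0], word_conf=[0.3, 0.2], sent_conf=0.4)
```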

2. What is amazing compared to previous studies?

They derive the confidence value from an uncertainty score computed by the translation model itself.

[Screenshot 2019-09-09 13 11 47]
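A hedged sketch of turning an uncertainty score into a confidence value; the mapping below (one minus normalized variance, averaged for the sentence level) is my illustrative choice, not the paper's exact formula:

```python
import numpy as np

def confidence_from_uncertainty(variances):
    """Map per-token uncertainty (e.g. variance of word probabilities
    across Monte Carlo dropout passes) to confidence scores in [0, 1].
    Higher variance -> lower confidence. The normalization here is an
    illustrative choice, not the paper's exact definition."""
    v = np.asarray(variances, dtype=float)
    v_norm = v / (v.max() + 1e-12)       # scale to [0, 1]
    word_conf = 1.0 - v_norm             # word-level confidence
    sent_conf = float(word_conf.mean())  # sentence-level: average over words
    return word_conf, sent_conf

word_conf, sent_conf = confidence_from_uncertainty([0.01, 0.20, 0.05])
```

The high-variance second token gets the lowest confidence, and the sentence score aggregates the word scores.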

3. Where is the key to technologies and techniques?

There are three types of uncertainty: data uncertainty, model uncertainty, and distributional uncertainty.

They focused on model uncertainty, estimated with Monte Carlo dropout sampling.
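Monte Carlo dropout keeps dropout active at inference and runs several stochastic forward passes; the spread of the predictions measures model uncertainty. The toy one-layer "model" below is entirely hypothetical, a stand-in for the paper's NMT decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_uncertainty(logits_fn, x, k=30, p=0.5):
    """Monte Carlo dropout: keep dropout ON at inference, run k
    stochastic forward passes, and use the variance of the predicted
    distributions as a model-uncertainty score."""
    probs = np.stack([softmax(logits_fn(x, rng, p)) for _ in range(k)])
    mean = probs.mean(axis=0)            # averaged prediction
    var = float(probs.var(axis=0).sum(-1))  # dispersion across passes
    return mean, var

def toy_logits(x, rng, p):
    # Hypothetical 1-layer "model": a random dropout mask on the input
    # (with inverted scaling), then a fixed projection to 3 classes.
    w = np.array([[2.0, 0.0, -2.0], [0.0, 1.0, 0.0]])
    mask = rng.random(x.shape) > p
    return (x * mask / (1 - p)) @ w

mean_probs, uncertainty = mc_dropout_uncertainty(toy_logits, np.array([1.0, 0.5]))
```

Because the dropout mask differs on every pass, the k output distributions disagree, and that disagreement is the uncertainty signal.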

4. How did they validate it?

In MT experiments, they compared BLEU scores across the different confidence estimation methods.

[Screenshot 2019-09-09 12 59 08]

As shown above, the CEV method performs best.

They then show that back-translation with their uncertainty-based confidence method achieves a better score than a neural Quality Estimation (QE) method.

[Screenshot 2019-09-09 13 07 54]

5. Is there a discussion?

The big difference from QE is that their method does not require any additional models.

6. Which paper should be read next?

Uncertainty-based studies

a1da4 commented 5 years ago

#19

a1da4 commented 5 years ago

#21