0. Paper
@inproceedings{ive-etal-2018-deepquest,
title = "deep{Q}uest: A Framework for Neural-based Quality Estimation",
author = "Ive, Julia and
Blain, Fr{\'e}d{\'e}ric and
Specia, Lucia",
booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
month = aug,
year = "2018",
address = "Santa Fe, New Mexico, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/C18-1266",
pages = "3146--3157",
}
1. What is it?
They proposed two QE models:
sentence-level: comparable to the SoTA model (POSTECH, WMT 2017) on sentence-level QE
document-level: the first neural approach to document-level QE, and it outperformed POSTECH
2. What is amazing compared to previous studies?
Their model is simpler and lighter than POSTECH.
They are the first to propose a document-level QE model that does not simply average sentence-level QE scores.
3. Where is the key to technologies and techniques?
Intro
Recent work has tried to improve on the SoTA method, POSTECH.
POSTECH consists of two components: a predictor and an estimator.
This approach requires a large amount of data and training time, so some works have tried to replace the predictor; this work does as well.
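To make the predictor/estimator split concrete, here is a minimal PyTorch sketch of the two-stage idea: a word-prediction model that has to be pre-trained on a large parallel corpus, and a small regressor trained on the much smaller QE-labelled data on top of its features. All class names, layer sizes, and the way the source context is summarised are my own simplifications, not POSTECH's actual architecture.

```python
import torch
import torch.nn as nn


class WordPredictor(nn.Module):
    """Rough stand-in for a POSTECH-style predictor: trained on large parallel
    data to predict each MT word from source and target context; its per-word
    features are then reused for QE. Simplified illustration only (e.g. a
    single shared vocabulary is assumed)."""

    def __init__(self, vocab=10000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.src_rnn = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.tgt_rnn = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.word_logits = nn.Linear(4 * dim, vocab)

    def forward(self, src_ids, mt_ids):
        src, _ = self.src_rnn(self.emb(src_ids))                  # (B, Ls, 2*dim)
        tgt, _ = self.tgt_rnn(self.emb(mt_ids))                   # (B, Lt, 2*dim)
        src_ctx = src.mean(1, keepdim=True).expand(-1, tgt.size(1), -1)
        feats = torch.cat([tgt, src_ctx], dim=-1)                 # per-MT-word features
        return feats, self.word_logits(feats)                     # features + word predictions


class Estimator(nn.Module):
    """Regressor from the predictor's per-word features to one sentence-level
    score (e.g. HTER); only this part needs the small QE-labelled data."""

    def __init__(self, dim=128):
        super().__init__()
        self.rnn = nn.GRU(4 * dim, dim, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, feats):
        states, _ = self.rnn(feats)
        return torch.sigmoid(self.score(states.mean(1)))
```

The costly part is pre-training the word predictor on millions of parallel sentence pairs before any QE label is used; deepQuest's models (sketched after "Their model" below) drop that stage.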
Their model
Sentence-level (left)
predictor: uses two bi-directional RNNs (one for the source sentence, one for the MT output)
This component weights the words in each sentence.
estimator: uses an attention mechanism to compute the score from these weights.
Document-level (right)
predictor: uses a bi-directional RNN whose inputs are sentence-level representations (of the source and MT output sentences)
estimator: uses an attention mechanism to compute the score from these weights.
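Below is a minimal PyTorch sketch of how I read this description: two bi-directional GRUs encode the source and the MT output, attention pools the word-level states into a sentence vector and score, and a second bi-directional GRU plus attention over the per-sentence vectors produces the document-level score. The GRU choice, all layer sizes, and the exact way the two encoders' outputs are combined (here simply concatenated along the time axis) are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def attention_pool(states, attn_layer):
    """Weight the time steps with a learned attention layer and sum them.
    states: (batch, steps, dim) -> (batch, dim)."""
    weights = F.softmax(attn_layer(states).squeeze(-1), dim=-1)    # (batch, steps)
    return torch.bmm(weights.unsqueeze(1), states).squeeze(1)


class SentenceQE(nn.Module):
    """Light-weight sentence-level model in the spirit of deepQuest's BiRNN
    architecture: two bi-directional GRUs encode the source and MT sentences
    ("predictor"); attention weights the word-level states and a dense layer
    turns the pooled vector into one quality score ("estimator")."""

    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.src_rnn = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.tgt_rnn = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * dim, 1)
        self.out = nn.Linear(2 * dim, 1)

    def sentence_vector(self, src_ids, mt_ids):
        src_states, _ = self.src_rnn(self.src_emb(src_ids))        # (B, Ls, 2*dim)
        mt_states, _ = self.tgt_rnn(self.tgt_emb(mt_ids))          # (B, Lt, 2*dim)
        states = torch.cat([src_states, mt_states], dim=1)         # (B, Ls+Lt, 2*dim)
        return attention_pool(states, self.attn)                   # (B, 2*dim)

    def forward(self, src_ids, mt_ids):
        return torch.sigmoid(self.out(self.sentence_vector(src_ids, mt_ids)))


class DocumentQE(nn.Module):
    """Document-level extension: reuse the sentence encoder, run a second
    bi-directional GRU over the per-sentence vectors of a document, then
    attention-pool them into a single document-level score.
    (dim must match the dim used to build sentence_qe.)"""

    def __init__(self, sentence_qe, dim=64):
        super().__init__()
        self.sentence_qe = sentence_qe
        self.doc_rnn = nn.GRU(2 * dim, dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * dim, 1)
        self.out = nn.Linear(2 * dim, 1)

    def forward(self, doc_src, doc_mt):
        # doc_src / doc_mt: lists of (1, L) token-id tensors, one per sentence.
        sent_vecs = torch.stack(
            [self.sentence_qe.sentence_vector(s, m) for s, m in zip(doc_src, doc_mt)],
            dim=1)                                                  # (1, n_sents, 2*dim)
        doc_states, _ = self.doc_rnn(sent_vecs)                     # (1, n_sents, 2*dim)
        return torch.sigmoid(self.out(attention_pool(doc_states, self.attn)))


# Toy usage: one document of three sentence pairs with random token ids.
sent_model = SentenceQE(src_vocab=1000, tgt_vocab=1000)
doc_model = DocumentQE(sent_model)
doc_src = [torch.randint(0, 1000, (1, n)) for n in (7, 5, 9)]
doc_mt = [torch.randint(0, 1000, (1, n)) for n in (8, 5, 10)]
print(doc_model(doc_src, doc_mt))   # one score for the whole document
```

Because neither encoder has to be pre-trained as a word predictor on large parallel data, this setup stays much lighter than the predictor-estimator pipeline above.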
4. How did they validate it?
They used sentence-level and document-level QE datasets.
Sentence-level
They achieved scores comparable to the SoTA model, POSTECH.
Document-level
They outperformed POSTECH.
This suggests that document-level QE does not need to average sentence-level QE scores.
5. Is there a discussion?
6. Which paper should I read next?
OpenKiwi