Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
I noticed that some of the results reported in the WMT 2014 EN-DE table are obtained by models trained on data from newer WMT datasets (although they report results on newstest2014), e.g., Edunov et al. (2018) use WMT'18 and Wu et al. (2019) use WMT'16 for training.
The few results on WMT 2014 EN-FR that I checked were fine, though. Here are the papers I checked: