Unbabel / OpenKiwi

Open-Source Machine Translation Quality Estimation in PyTorch
https://unbabel.github.io/OpenKiwi/
GNU Affero General Public License v3.0

Questions about prediction results and evaluate module #27

Closed HXX97 closed 5 years ago

HXX97 commented 5 years ago

Hi! Thank you for the work you have done.

I'm working with the Predictor-Estimator model, and I have successfully trained the predictor and the estimator at both the word level and the sentence level. However, the results are not quite what I expected: the word tags are output as probabilities.

How can I convert these probabilities into binary OK/BAD tags? If there is a threshold, what is its value?

Besides, I found that if I train the estimator on both word-level and sentence-level data, it produces both word tags and sentence scores. When I evaluated them with kiwi, the results show that the sentence scores derived from the word tags are much better. May I ask how the sentence score is calculated from the word tags? Is it a simple average? I did not set the --sents-avg option.

Thank you so much!

trenous commented 5 years ago

Hello HXX97,

As you noticed, the models in OpenKiwi predict a probability distribution over word tags. To turn these into hard tags, the most obvious decision threshold is 0.5.
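A minimal sketch of that thresholding (not the OpenKiwi code itself, and assuming the model writes out one P(BAD) value per target word; check your output file to confirm which class the probability refers to):

```python
import numpy as np

# Hypothetical per-word probabilities of the BAD class for one sentence.
bad_probs = np.array([0.12, 0.81, 0.47, 0.65])

threshold = 0.5  # simple default decision threshold
tags = ["BAD" if p > threshold else "OK" for p in bad_probs]
print(tags)  # ['OK', 'BAD', 'OK', 'BAD']
```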

The class ThresholdCalibrationMetric chooses the threshold that optimizes the F1 metric. In practice we found that tuning the bad-weight option for F1 score has a similar effect as a posteriori calibration, so if you have already tuned your bad-weight, I would not expect any gains from the calibration. (The current implementation of threshold calibration also has some issues that make it unstable, but that's another topic.)
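To illustrate the idea behind calibration (again, just a sketch, not the ThresholdCalibrationMetric implementation): sweep candidate thresholds on a held-out set with gold tags and keep the one with the best F1, treating BAD as the positive class.

```python
import numpy as np
from sklearn.metrics import f1_score

def calibrate_threshold(bad_probs, gold_tags, grid=np.linspace(0.05, 0.95, 19)):
    """Return the threshold from `grid` that maximizes F1 (1 = BAD, 0 = OK)."""
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        preds = (bad_probs > t).astype(int)
        f1 = f1_score(gold_tags, preds)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```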

The sents-avg option is redundant at this point; you do not need to pass it. Word-level probabilities are turned into sentence scores by simple averaging; word-level tags are first mapped to 'bad probabilities' of 0 for OK and 1 for BAD and then averaged. We usually (but not always) observed gains of 1-3 Pearson points from training the sentence level directly, as opposed to averaging word-level tags. To achieve good performance, it is crucial that you pass the sentence-ll flag.
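Concretely, the two averaging variants look roughly like this (a toy example with made-up numbers, not the library code):

```python
import numpy as np

# Hypothetical word-level outputs for one sentence.
bad_probs = np.array([0.12, 0.81, 0.47, 0.65])   # P(BAD) per word
hard_tags = (bad_probs > 0.5).astype(float)       # OK -> 0.0, BAD -> 1.0

sentence_score_from_probs = bad_probs.mean()  # average of the raw probabilities
sentence_score_from_tags = hard_tags.mean()   # average of the binarized tags
```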

Could you let us know which datasets you used (for pretraining and QE training), as well as the output of the evaluate pipeline?

Best, Sony

HXX97 commented 5 years ago

Hello Sony:

Thank you very much for your reply and attention. Now I know how to handle the probability distributions.

I'm training the predictor with the zh-en parallel corpora provided by CWMT 2018. For QE training, the datasets are provided by CCMT 2019.

I also tried passing the sentence-ll flag when training the estimator, as you advised, and the results are much better. This is interesting and I'll try to figure out why it happens.

Anyway, thank you very much. You really did me a favor :)

trenous commented 5 years ago

Happy to help :) I will change the default value of sentence-ll to True; it is unintuitive that you have to set the flag to get decent results. I will close the issue for now, but feel free to re-open it, or open a new one, if you have further questions.