Unbabel / COMET

A Neural Framework for MT Evaluation
https://unbabel.github.io/COMET/html/index.html
Apache License 2.0

Do system scores above 100 really "differ"? #128

Closed BramVanroy closed 1 year ago

BramVanroy commented 1 year ago

I guess that this question comes down to a discussion that we had earlier about scaling and differences across languages and domains.

I came across this commercial blog post by Lilt where they claim to outperform Google/GPT-4 in specific domains. They use COMET to evaluate.

[Image: COMET scores reported by Lilt]

As you can see, the scores for the Manufacturing domain exceed 100. I wonder how this should be interpreted and whether COMET was intended to produce scores above 100. I know that the model can predict such scores (it's just unbounded regression), but I would intuitively think that 100 is the maximum and that higher scores are not meaningful. So is the difference between 101 and 106 in this example meaningful (comparable to, e.g., the difference between 91 and 96), or is it quite meaningless and just means "the model predicts 100+, so both systems are equally good"?

Thanks for any insights!

ricardorei commented 1 year ago

Hey @BramVanroy, if you look at the z-scores used to train COMET (the wmt20-comet-da model), there are some outliers that go well over 1. In other words, for that model it's possible to have scores over 1.

From "empirical" experience the model does not output scores > 1.0 very often and when it does its usually because the domain is easy with many short segments or your model just overfitted a specific domain. At Unbabel sometimes we came across such COMET values when we do domain adaptation for a domain where content repeats itself a lot.

Nonetheless, I also came across that blog post, and I think it's just publicity; I would not rely much on its scientific value. The baseline scores are also very high, which hints at a domain where content is not difficult: if you give a model some example translations in that domain, it will easily learn to produce perfect translations.

They explicitly talk about a new GPT-style model and then write that the model is 1000x smaller than GPT-4. What does that mean? First of all, the GPT-4 parameter count is not disclosed, and if we assume the size of GPT-3 (175B parameters), then 1000x fewer parameters is roughly the size of a Transformer-big model, which is commonly used for translation.

That said, it's great that they are using in-context learning; there is a lot of value in those approaches for MT, and they can differentiate companies from generic MT like Google's. But I would not take the results very seriously (from a scientific perspective).

ricardorei commented 1 year ago

Btw, for the new model (wmt22-comet-da) it's much, much less common to see scores over 1.0, because the training data was scaled between 0 and 1.
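For reference, here is a minimal sketch of scoring with the new checkpoint. This assumes a recent COMET release where download_model takes the Hugging Face model name Unbabel/wmt22-comet-da and predict returns an object with .scores and .system_score:

from comet import download_model, load_from_checkpoint

# Assumes COMET >= 2.0; the model name follows the Hugging Face Hub convention.
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [{
    "src": "There's mask-shaming and then there's full on assault.",
    "ref": "Masken-Shaming ist eine Sache, Körperverletzung eine andere.",
    "mt": "Es gibt Maskenscham und dann gibt es den vollen Angriff.",
}]

# Scores should fall mostly in [0, 1], since the wmt22-comet-da
# training targets were scaled to that range.
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores, output.system_score)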

BramVanroy commented 1 year ago

Thanks for the response, Ricardo! I agree with your observations about the blog post: it did not contain much useful information, in the sense that the technical details are missing. That being said, in-context learning/prompted translation does seem like a fruitful prospect in the months and years to come.

BramVanroy commented 1 year ago

Sorry for bumping this again @ricardorei, but I am now experiencing the "opposite": very low, negative scores. I was translating some WMT data with gpt-3.5 and got the translation below back. Curiously, its COMET score (with the Cometinho checkpoint) is -1.034. That's before multiplying by 100, so a very low score.

from comet import load_from_checkpoint, download_model

if __name__ == "__main__":
    # A single src/ref/mt triple from WMT data; the MT output is from gpt-3.5.
    data = [{
        "src": "There's mask-shaming and then there's full on assault.",
        "ref": "Masken-Shaming ist eine Sache, Körperverletzung eine andere.",
        "mt": "Es gibt Maskenscham und dann gibt es den vollen Angriff."
    }]
    # Download and load the Cometinho checkpoint (a small distilled model).
    model_path = download_model("eamt22-cometinho-da")
    model = load_from_checkpoint(model_path)

    # In this COMET version, predict returns (segment_scores, system_score).
    seg_scores, sys_score = model.predict(data, batch_size=8, gpus=0)
    print(seg_scores, sys_score)

My worry is similar to before: sentence scores like these greatly impact the system scores, which makes me wonder whether it makes sense to ReLU the scores, or even pass them through a sigmoid. If I remember correctly, that is what you do in the new models, is that correct? If so, would it make sense to do that when using the older models as well?
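For what it's worth, something like the following is what I have in mind. clip_score and the [0, 1] range are just my assumption of how a post-hoc clamp could look, not anything the library does for the older checkpoints:

def clip_score(score: float, low: float = 0.0, high: float = 1.0) -> float:
    # Hypothetical post-hoc clamp: ReLU-like at the bottom, capped at the top.
    return max(low, min(high, score))

# Segment scores from an older unbounded checkpoint (e.g. Cometinho);
# without clipping, the -1.034 outlier drags the system average down.
seg_scores = [0.62, 0.81, -1.034, 0.77]
clipped = [clip_score(s) for s in seg_scores]
system_score = sum(clipped) / len(clipped)
print(clipped, system_score)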