Closed clairehua1 closed 1 year ago
Hi @clairehua1,
You should avoid comparing scores across languages, and even across domains. This holds not just for COMET but for any MT metric.
For example BLEU, even though it is lexical, depends heavily on the underlying tokenizer, so results vary a lot between languages.
PS: even human annotation shows a lot of variability between languages and domains. If we want reliable, comparable results we need to make sure the test conditions are the same (same data, same annotators).
Cheers, Ricardo
Thanks for the answer Ricardo! Is there a way to interpret the COMET score other than using it as a ranking system?
@clairehua1 for a specific setting (language pair and domain) you could plot the distribution of scores and analyse it by looking at quantiles. The scores usually follow a normal distribution.
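A minimal sketch of that quantile analysis, using synthetic scores in place of real COMET output (the normal distribution centred near 0.5 is an assumption for illustration, not measured data):

```python
import random
import statistics

# Hypothetical stand-in for COMET scores of one language pair / domain.
random.seed(0)
scores = [random.gauss(0.5, 0.15) for _ in range(1000)]

# Quartiles of the empirical score distribution.
q1, median, q3 = statistics.quantiles(scores, n=4)
print(f"Q1={q1:.3f}  median={median:.3f}  Q3={q3:.3f}")

# Place a new system's score within that distribution (percentile rank).
new_score = 0.62
rank = sum(s <= new_score for s in scores) / len(scores)
print(f"score {new_score} sits at the {rank:.0%} percentile")
```

With real data you would collect the segment-level scores COMET returns for your test set and inspect where a system falls relative to those quantiles, rather than reading the raw number in isolation.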
To give a bit more context, most models are trained to predict a z-normalized direct assessment (a z-score). Z-scores have a mean of 0 and follow a normal distribution, which means that ideally a score of 0 should represent an average translation.
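For concreteness, z-normalization of raw direct-assessment scores is just centring and scaling (the example scores are made up):

```python
import statistics

def z_normalize(scores):
    """Map raw DA scores to z-scores: subtract the mean, divide by the std."""
    mean = statistics.mean(scores)
    std = statistics.stdev(scores)
    return [(s - mean) / std for s in scores]

raw = [72.0, 85.0, 60.0, 78.0, 65.0]   # hypothetical DA scores (0-100 scale)
z = z_normalize(raw)
# The z-scores have mean 0, so 0 corresponds to an average translation.
```

In WMT practice the normalization is done per annotator to cancel out individual rating biases, but the arithmetic per group is the same as above.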
In practice the distribution of scores (for the default model wmt20-comet-da) is slightly skewed towards positive scores, which means that an average translation is usually assigned a score of around 0.5. I have an explanation here
In the plots above you can see how different the scores are between English-German and English-Hausa: the "peak" for German is a bit higher than for Hausa.
Nonetheless, this is expected, since German translations tend to have better quality than Hausa ones.