Unbabel / COMET

A Neural Framework for MT Evaluation
https://unbabel.github.io/COMET/html/index.html
Apache License 2.0

[QUESTION] Comet kiwi architecture #216

Closed vince62s closed 1 month ago

vince62s commented 2 months ago

My understanding, based on this image and this: https://github.com/Unbabel/COMET/blob/master/comet/models/multitask/unified_metric.py#L473

is that for the wmt22/wmt23-cometkiwi models you take only the first token as the sentence embedding and compute the score through the layerwise attention + feed-forward layers.

This setting in the hparams is a bit confusing: https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xl/blob/main/hparams.yaml#L30

However, what motivated the choice of the first token vs. average pooling in the case of CometKiwi?

Thanks

ricardorei commented 2 months ago

You are right. The diagram is correct but the hparams are confusing because that flag is actually not used for this model.

Contrary to RegressionMetric models, where the different pooling options influence the sentence embedding computation, in UnifiedMetric models we always use the same pooling technique (the CLS token).
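
For illustration, a minimal sketch of the two behaviours, assuming a generic PyTorch encoder output (the function names are illustrative, not the actual COMET classes):

```python
import torch

# Sketch only: how a UnifiedMetric-style model turns encoder output into a score.
# Only the first token ([CLS]/<s>) is used, so a `pool: avg` entry in hparams.yaml
# has no effect on this path.
def cls_score(token_embeddings: torch.Tensor, estimator: torch.nn.Module) -> torch.Tensor:
    """token_embeddings: [batch, seq_len, hidden] after the layerwise mix."""
    sentence_embedding = token_embeddings[:, 0]        # first token only
    return estimator(sentence_embedding).squeeze(-1)   # feed-forward regression head

# In a RegressionMetric-style model the pooling flag does matter, e.g. average pooling:
def avg_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    mask = attention_mask.unsqueeze(-1).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
```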

ricardorei commented 2 months ago

The config in the YAML is just there because all classes inherit from CometModel

vince62s commented 2 months ago

But why? Does it produce a proper sentence score if we average all token embeddings?

ricardorei commented 2 months ago

This was something I ran a couple of tests on, and it was not worth it for models where we perform cross-encoding. With a model like CometKiwi, where target and source are encoded together and the self-attention can attend to both sentences at the same time, the representations captured in the CLS token are superior to average pooling across the entire input.

Another thing we tried was to just gather the embeddings of the target (which already received attention from the source) and average those... The result is very similar to using the CLS token only, and it complicates the code a bit because you have to keep track of the separator tokens in the middle of the input. So the decision was based on performance and simplicity...

This is not the case for other models where there is no attention between sentences... For those models we saw benefits in doing average pooling. Btw, our experiments seem to validate some findings from retrieval tasks, where there is a long-running debate about cross-encoding vs. dual encoding with average pooling.
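
To make the trade-off concrete, here is a hypothetical sketch of that target-only pooling (the helper name and the exact input layout are assumptions; the point is the separator bookkeeping it requires):

```python
import torch

def target_avg_pool(token_embeddings: torch.Tensor,
                    input_ids: torch.Tensor,
                    sep_token_id: int) -> torch.Tensor:
    """Average only the first segment of a cross-encoded input.

    token_embeddings: [batch, seq_len, hidden]; input_ids: [batch, seq_len].
    Assumes the first segment starts after the CLS token and ends at the first
    separator token -- exactly the bookkeeping that CLS pooling avoids.
    """
    pooled = []
    for emb, ids in zip(token_embeddings, input_ids):
        first_sep = (ids == sep_token_id).nonzero(as_tuple=True)[0][0].item()
        pooled.append(emb[1:first_sep].mean(dim=0))    # skip CLS, stop at separator
    return torch.stack(pooled)
```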

vince62s commented 2 months ago

Last ablation question: did you try the same architecture without the layerwise_attention? Does it bring a lot?

ricardorei commented 2 months ago

I did, it's basically the same performance.

For some tasks different layers can give you different results, and some layers might be better than others. The idea behind using the layerwise_attention was to reduce the need for that layer search when doing hyper-parameter tuning, and I found it worked well... Additionally, we could eventually prune the top layers if needed, but we ended up not doing it. We describe the layer pruning here.

Anyway, training a model without the layerwise_attention will eventually lead to similar results, and it's not an absolute need.
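
For anyone unfamiliar with the mechanism: layerwise attention is essentially a learned scalar mix over all encoder layers. A minimal sketch (illustrative names, not the actual COMET implementation):

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Softmax-weighted sum over encoder layers, so the model learns which layers
    matter instead of a single layer being picked by hyper-parameter search."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # one weight per layer
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_outputs: list) -> torch.Tensor:
        # layer_outputs: list of [batch, seq_len, hidden] tensors, one per layer
        norm_weights = torch.softmax(self.weights, dim=0)
        mixed = sum(w * h for w, h in zip(norm_weights, layer_outputs))
        return self.gamma * mixed
```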

vince62s commented 2 months ago

thanks for this.

Another question: I scored a dataset with cometkiwi-XL and then trained an xlm-roberta-large model on that dataset / the scores from cometkiwi-XL.

It barely improves on the "original" wmt22-cometkiwi-da model.

That means it is quite difficult to distil cometkiwi-XL into a smaller model. Did you observe the same?
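
(For reference, the teacher-scoring step looks roughly like this; a minimal sketch using the public comet Python API, with the student then trained on these scores via comet-train and a regular config. The XL checkpoint may require Hugging Face authentication.)

```python
from comet import download_model, load_from_checkpoint

# Sketch: score (src, mt) pairs with the XL teacher to obtain pseudo-labels.
teacher = load_from_checkpoint(download_model("Unbabel/wmt23-cometkiwi-da-xl"))

data = [
    {"src": "Dem Feuer konnte Einhalt geboten werden", "mt": "The fire could be stopped"},
    {"src": "Schulen und Kindergärten wurden eröffnet", "mt": "Schools and kindergartens opened"},
]
prediction = teacher.predict(data, batch_size=8, gpus=1)
print(prediction.scores)  # segment-level pseudo-labels for the smaller student model
```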

ricardorei commented 1 month ago

Yes, it's hard to distil an XXL/XL model into a Large model... I believe this is the case because the Large model is already close to the XL and XXL models. There is not a lot of improvement with scale.

I had a student working on distillation who got nice results distilling XL/XXL into a model based on MiniLM V2. The resulting model is fast and has good performance... It's a bit better than training with the annotations from WMT.

vince62s commented 1 month ago

Hmm, I am surprised; in my tests XL is much better than Large. I have not tested XXL, but based on your last paper it seems to be a marginal improvement over XL.

ricardorei commented 1 month ago

It's true. The improvements from Large to XL are marginal. You notice a slightly bigger improvement when going to XXL, but for its size the improvement is not that big. I think this is the case because InfoXLM is a really strong encoder for its size, while XLM-R XL and XXL are undertrained for their size; they have not been trained enough... Unfortunately, no one seems to be interested in training large multilingual encoders anymore.

vince62s commented 1 month ago

I think we are not saying the same thing :)