lessw2020 / Ranger-Deep-Learning-Optimizer

Ranger - a synergistic optimizer using RAdam (Rectified Adam), Gradient Centralization and LookAhead in one codebase
Apache License 2.0

Did you try to fine-tune transformers LM with Ranger? #13

Open avostryakov opened 5 years ago

avostryakov commented 5 years ago

Recent transformer architectures are very prominent in NLP: BERT, GPT-2, RoBERTa, XLNet. Did you try to fine-tune them on some NLP task? If so, what were the best Ranger hyper-parameters and learning rate scheduler?

LifeIsStrange commented 4 years ago

Testing on XLNet should be prioritized, as it is the current state of the art. ERNIE 2.0 would be interesting too.

JohnGiorgi commented 4 years ago

@avostryakov I tried fine-tuning a BERT-based model for joint NER and relation classification. It performed about 1.5% worse on my tasks than the AdamW implementation in Transformers:

[training curve screenshots: AdamW vs. Ranger]

It is possible that with more tuning I might be able to close the gap. If anyone else has any tips for fine-tuning BERT with Ranger, please let me know!
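
For anyone looking for a concrete starting point, here is a minimal sketch of swapping AdamW for Ranger in a Hugging Face fine-tuning setup with a flat-then-cosine-anneal schedule. The model name, learning rate, and 72/28 schedule split are illustrative assumptions, not tuned values from this thread.

```python
# Minimal sketch (not from this thread) of fine-tuning a BERT model with Ranger.
# Assumes this repo is importable as `from ranger import Ranger`; the model name,
# LR, and schedule fractions below are illustrative assumptions.
import math

import torch
from transformers import AutoModelForSequenceClassification

from ranger import Ranger  # optimizer from this repository

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Ranger (RAdam + LookAhead) is typically run without a warmup phase, since
# RAdam already rectifies the adaptive learning rate in the early steps.
optimizer = Ranger(model.parameters(), lr=2e-5, weight_decay=0.01)

# Flat-then-cosine schedule: hold the LR for most of training, then anneal.
total_steps = 10_000                  # illustrative
flat_steps = int(0.72 * total_steps)  # illustrative split

def lr_lambda(step: int) -> float:
    if step < flat_steps:
        return 1.0
    progress = (step - flat_steps) / max(1, total_steps - flat_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Inside the training loop: loss.backward(); optimizer.step();
# scheduler.step(); optimizer.zero_grad()
```

The flat-then-anneal split mirrors the schedule suggested in the Ranger README for image tasks; for BERT fine-tuning it is only a starting point and may need retuning.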

lessw2020 commented 4 years ago

I'm working with DETR, which does object detection with a transformer internally, and will test Ranger out there soon.
Note that Ranger now has GC (gradient centralization); it will be interesting to see if that helps for transformers.
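
For context, here is a minimal sketch of what the GC step does, separate from the optimizer internals: the gradient of each multi-dimensional weight tensor is re-centered to zero mean before it is used in the update. The helper name below is illustrative, not the repo's exact code.

```python
# Minimal sketch of the gradient centralization (GC) operation itself:
# for weight tensors with 2+ dimensions, the gradient is re-centered to
# zero mean over all dimensions except the output dimension.
import torch

def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    # Applied to conv / linear weights (ndim >= 2); bias gradients are left alone.
    if grad.dim() > 1:
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

# Example: a 4D conv weight gradient (out_channels, in_channels, kH, kW)
g = torch.randn(8, 3, 3, 3)
print(centralize_gradient(g).mean(dim=(1, 2, 3)))  # ~0 per output channel
```

Recent versions of the Ranger class expose use_gc and gc_conv_only arguments to control this behavior.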

hiyyg commented 12 months ago

How does Ranger perform for DETR?