openai / gpt-3

GPT-3: Language Models are Few-Shot Learners
https://arxiv.org/abs/2005.14165

Improve your state of the art by using best activation function and best meta optimizer #2

Open LifeIsStrange opened 4 years ago

LifeIsStrange commented 4 years ago

You could increase GPT-3's accuracy by using Ranger, which combines state-of-the-art optimizers with gradient centralization: https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer. You seem to be using the Adam optimizer; it has been succeeded by RAdam (rectified Adam). Ranger would bring you this improvement, plus several other synergistic ones, for free.
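For concreteness, a minimal sketch of the swap in PyTorch, assuming the `Ranger` class exported by the linked repo (the model, learning rate, and loss here are placeholders, not anything from GPT-3):

```python
import torch
import torch.nn as nn
from ranger import Ranger  # from the linked lessw2020/Ranger-Deep-Learning-Optimizer repo

model = nn.Linear(512, 512)  # stand-in for the real model

# Before: plain Adam
# optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

# After: Ranger = RAdam + Lookahead (+ gradient centralization)
optimizer = Ranger(model.parameters(), lr=3e-4)

for step in range(10):
    x = torch.randn(32, 512)
    loss = model(x).pow(2).mean()  # dummy loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```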

Orthogonally, you would probably also benefit from Mish instead of the activation you currently use (ReLU?), but it should be tested after Ranger, since it could regress accuracy (even if that is unlikely): https://github.com/digantamisra98/Mish
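For reference, Mish is defined as `x * tanh(softplus(x))`; a minimal PyTorch version (the linked repo ships its own implementation) looks like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x)), a smooth, non-monotonic alternative to ReLU."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

# Drop-in replacement for nn.ReLU() inside a block:
block = nn.Sequential(nn.Linear(512, 512), Mish())
print(block(torch.randn(2, 512)).shape)  # torch.Size([2, 512])
```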

minimaxir commented 4 years ago

At the scale these models are trained, using a specific optimizer/activation will not necessarily get you better results.

digantamisra98 commented 4 years ago

Additionally, considering GPT-3's size, I would suggest not using any optimizer heavier than SGD, because of the compute and memory cost. The same goes for Mish.
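A rough back-of-the-envelope for the optimizer-state part of that cost, assuming fp32 state buffers (Adam/RAdam keep two extra buffers per parameter, momentum SGD keeps one, plain SGD keeps none):

```python
params = 175e9          # GPT-3 parameter count
bytes_per_fp32 = 4

def state_gib(buffers_per_param):
    """Extra optimizer-state memory in GiB for the given number of fp32 buffers per parameter."""
    return params * bytes_per_fp32 * buffers_per_param / 2**30

print(f"plain SGD:    {state_gib(0):7.0f} GiB extra")  # 0 GiB
print(f"SGD+momentum: {state_gib(1):7.0f} GiB extra")  # ~652 GiB
print(f"Adam/RAdam:   {state_gib(2):7.0f} GiB extra")  # ~1304 GiB
```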

LifeIsStrange commented 4 years ago

@minimaxir It will not necessarily bring gains, but it is still low-hanging fruit that should be tried.

LifeIsStrange commented 4 years ago

@digantamisra98 RAdam (not the full Ranger package) does not increase computational cost.

I've read somewhere that Mish can be as efficient as ReLU, maybe with https://github.com/thomasbrandon/mish-cuda?
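One way to check the overhead is a quick micro-benchmark of a naive eager-mode Mish against ReLU (a fused kernel like the one in mish-cuda would narrow whatever gap shows up; tensor size and iteration count here are arbitrary):

```python
import time
import torch
import torch.nn.functional as F

x = torch.randn(4096, 4096)

def bench(fn, iters=100):
    # Warm up, then time the average forward pass in milliseconds.
    for _ in range(5):
        fn(x)
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters * 1e3

print(f"relu: {bench(torch.relu):.3f} ms")
print(f"mish: {bench(lambda t: t * torch.tanh(F.softplus(t))):.3f} ms")
```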

digantamisra98 commented 4 years ago

@LifeIsStrange everything above SGD is expensive.