openai / gpt-2

Code for the paper "Language Models are Unsupervised Multitask Learners"
https://openai.com/blog/better-language-models/

GPT-2 implementation problem #334

Open sanhai77 opened 7 months ago

sanhai77 commented 7 months ago

Hi, I'm reading the GPT-2 paper and have a question about the following sentence on the implementation:

'A modified initialization method is used to account for the accumulation on the residual path with model depth. We scale the weights of residual layers at initialization by a factor of 1/√N, where N is the number of residual layers.'

My question: we normalize after the accumulation (addition, then normalization), so why do we also need to scale the weights? Isn't the normalization already there to reduce the impact of accumulation?
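For concreteness, here is a small NumPy sketch of what that scaling does to the residual stream at initialization. The width `d` and depth `N` are assumed values, and each residual layer is reduced to a single random linear map; this is not the repo's actual code, just a toy model of the quoted scheme:

```python
import numpy as np

# Toy illustration of the quoted init scheme; sizes are assumed, and this
# is NOT the repo's actual code. Each "residual layer" here is just a
# random linear map whose output is added back onto the stream.
rng = np.random.default_rng(0)
d, N = 256, 48  # assumed width and number of residual layers

def final_std(scale):
    """Std of the residual stream after N residual additions at init."""
    x = rng.standard_normal(d)  # unit-variance input
    for _ in range(N):
        # Variance-preserving init: entries ~ N(0, 1/d), so W @ x has
        # roughly the same variance as x.
        W = rng.standard_normal((d, d)) / np.sqrt(d)
        x = x + scale * (W @ x)  # residual addition
    return float(x.std())

unscaled = final_std(1.0)             # variance roughly doubles per layer
scaled = final_std(1.0 / np.sqrt(N))  # variance stays roughly constant
print(unscaled, scaled)
```

Without the scaling, each addition of a fresh (roughly independent) variance-preserving branch about doubles the variance of the stream, so its standard deviation grows exponentially with depth; with the 1/√N factor the final variance stays O(1).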

arregit commented 7 months ago

This is an archived repository. I don't think this is the best place to ask. Good luck!