stanfordnlp / GloVe

Software in C and data files for the popular GloVe model for distributed word representations, a.k.a. word vectors or embeddings
Apache License 2.0

Step size and gradient clipping for bias terms #209

Open ErinGeorge opened 1 year ago

ErinGeorge commented 1 year ago

I added processing to the updates for the bias terms of the word vectors so that they mirror the other updates. Without this, the eta and grad-clip parameters do not behave as described, and the loss function being minimized is not quite the one that appears in the original paper.
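For illustration, here is a minimal sketch of what "mirroring the other updates" could look like for a single bias term, assuming the names `fdiff` (weighted gradient with respect to the bias), `eta`, `grad_clip_value`, and an accumulated-squared-gradient buffer, following the AdaGrad scheme used for the word-vector updates. This is only a sketch of the idea, not the exact patch:

```c
#include <math.h>

typedef double real;

/* Clip a gradient component to [-limit, limit], as done for the vector updates. */
static real clip(real x, real limit) {
    return fmin(fmax(x, -limit), limit);
}

/* Apply one AdaGrad step to a single bias term, with the same
   clipping and eta scaling as the word-vector coordinates. */
static void update_bias(real *bias, real *gradsq_bias,
                        real fdiff, real eta, real grad_clip_value) {
    /* Clip the raw gradient and scale by the step size. */
    real temp = clip(fdiff, grad_clip_value) * eta;

    /* AdaGrad update: divide by the root of the accumulated squared gradient. */
    *bias -= temp / sqrt(*gradsq_bias);

    /* Accumulate the squared (clipped, scaled) gradient for later steps. */
    *gradsq_bias += temp * temp;
}
```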

In personal experiments, this does not seem to affect the final output of the code noticeably in most cases. It appears to matter only in certain edge cases where the original code fails to converge, such as when the co-occurrence matrix contains many entries between 0 and 1.

AngledLuffa commented 1 year ago

What's kind of weird about this is that by omitting the eta term from the original bias calculation, we've effectively made the learning rate for the bias 200x the default learning rate for the rest of the parameters. Our first couple of experiments, rebuilding English word vectors with and without this change, suggest that the new learning rate makes the results worse. We'll dig into it some more and check whether there's a way to scale eta for the bias so that the vectors come out better.
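One way to explore that, reusing the `clip` helper and `real` typedef from the sketch above, would be a separate multiplier on the bias step size so its effective learning rate can be tuned independently of eta. The name `bias_eta_scale` is purely illustrative, not an existing GloVe flag:

```c
/* Hypothetical variant: scale the bias step size by bias_eta_scale,
   so the bias learning rate can be tuned relative to eta. */
static void update_bias_scaled(real *bias, real *gradsq_bias, real fdiff,
                               real eta, real grad_clip_value,
                               real bias_eta_scale) {
    real temp = clip(fdiff, grad_clip_value) * eta * bias_eta_scale;
    *bias -= temp / sqrt(*gradsq_bias);
    *gradsq_bias += temp * temp;
}
```

Setting `bias_eta_scale` to 1 reproduces the update in the sketch above, while larger values move back toward the faster bias updates the original code effectively used.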