andrewgcodes / xlstm

my attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture
MIT License

Forget gate bias should probably be initialized to 1 #3

[Open] twoletters opened this issue 1 month ago

twoletters commented 1 month ago

https://github.com/andrewgcodes/xlstm/blob/f0f54bf9794eb83ea181ada9dc55ce500da9688f/mLSTM.ipynb#L66

The training of traditional LSTMs benefits from initializing the forget gate bias to 1. It prevents the LSTM from forgetting until it has learned to do so, speeding up training.

It seems to me that sLSTM is essentially the same as the traditional LSTM in that regard, and initializing the forget gate biases to 1 should speed up training. Don't take my word for it, though. Test, don't trust.
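For concreteness, here is a minimal sketch of the trick applied to a stock PyTorch `nn.LSTM` (sizes are arbitrary, and this is an illustration, not the notebook's own cell):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64)  # arbitrary sizes

with torch.no_grad():
    for name, bias in lstm.named_parameters():
        if "bias" not in name:
            continue
        # PyTorch packs gate parameters as [input, forget, cell, output],
        # so the forget-gate slice is the second quarter of each bias vector.
        n = bias.shape[0] // 4
        bias.zero_()
        # bias_ih and bias_hh are summed inside the cell, so 0.5 + 0.5
        # gives an effective forget-gate bias of 1.0.
        bias[n:2 * n].fill_(0.5)
```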

twoletters commented 1 month ago

I amend my comment: this is useful only if the sigmoid is used as the activation function of the forget gate (one of the options proposed in the paper). If the exponential is used instead, the gate is already close to 1 whenever the pre-activation is close to zero, so no bias shift is needed.
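To spell out the arithmetic: at initialization the forget-gate pre-activation z is near zero (small weights, zero bias), and exp(0) = 1 while sigmoid(0) = 0.5, so only the sigmoid variant starts half-closed:

```python
import math

z = 0.0  # forget-gate pre-activation at init (weights and bias near zero)
print(math.exp(z))                   # 1.0   -> exponential gate starts fully open
print(1 / (1 + math.exp(-z)))        # 0.5   -> sigmoid gate starts half open
print(1 / (1 + math.exp(-(z + 1))))  # ~0.731 -> sigmoid with forget bias = 1
```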