kyegomez / xLSTM

Implementation of xLSTM in Pytorch from the paper: "xLSTM: Extended Long Short-Term Memory"
MIT License

Forget gate bias should probably be initialized to 1 #1

Open twoletters opened 1 month ago

twoletters commented 1 month ago

https://github.com/kyegomez/xLSTM/blob/020209fd7c156852a12a82d1bb21ce4a11309fc0/xlstm_torch/main.py#L52C9-L52C33

The training of traditional LSTMs benefits from initializing the forget gate bias to 1. It prevents the LSTM from forgetting until it has learned to do so, speeding up training.

It seems to me that sLSTM is essentially the same as the traditional LSTM in that regard, and initializing the forget gate biases to 1 should speed up training. Don't take my word for it, though. Test, don't trust.
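For concreteness, a minimal sketch of the idea (the actual class and attribute names in xlstm_torch/main.py may differ; this is not the repository's code, just an illustration of the initialization):

```python
import torch.nn as nn


class SLSTMCellSketch(nn.Module):
    """Minimal sLSTM-style cell skeleton, shown only to illustrate the forget-gate bias init."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        combined = input_size + hidden_size
        # One linear projection per gate: input, forget, output, cell candidate.
        self.w_i = nn.Linear(combined, hidden_size)
        self.w_f = nn.Linear(combined, hidden_size)
        self.w_o = nn.Linear(combined, hidden_size)
        self.w_z = nn.Linear(combined, hidden_size)

        # Initialize the forget-gate bias to 1: sigmoid(1.0) ≈ 0.73, so the cell
        # starts out retaining most of its state and only forgets once trained to.
        nn.init.constant_(self.w_f.bias, 1.0)


cell = SLSTMCellSketch(input_size=32, hidden_size=64)
```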

github-actions[bot] commented 1 month ago

Hello there, thank you for opening an issue! 🙏🏻 The team was notified and they will get back to you asap.

twoletters commented 1 month ago

I amend my comment: this is useful only if the sigmoid is used as the activation function of the forget gate (one of the options proposed in the paper). If the exponential is used, the forget gate is already close to 1 when the parameters are close to zero (since exp(0) = 1), so the bias trick is unnecessary.
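A quick numeric check of that distinction (plain PyTorch, independent of the repository's code):

```python
import torch

# Pre-activation of the forget gate when weights/bias start near zero.
zero_preactivation = torch.tensor(0.0)

print(torch.sigmoid(zero_preactivation))     # 0.5   -> sigmoid gate starts half-open
print(torch.sigmoid(torch.tensor(1.0)))      # ~0.73 -> sigmoid gate with bias initialized to 1
print(torch.exp(zero_preactivation))         # 1.0   -> exponential gate already retains everything
```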