Open GianlucaMancusi opened 4 months ago
Thanks!!! That's a good paper. My takeaway is that it might be necessary to increase the hidden dim for the model to memorize the previous tokens when sequences get longer.
I'm going to leave this issue open for discussion of the paper.
I believe it is also essential to understand the limits of Mamba. This paper shows that any state-space model fails to solve the copy task unless its latent state grows linearly with the sequence length: https://arxiv.org/pdf/2402.01032.pdf
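The intuition behind that result can be sketched with a simple counting argument: a fixed-size state holds a bounded number of bits, while reproducing an arbitrary sequence verbatim requires bits proportional to its length. A rough sketch below, where the state dimension and precision are illustrative numbers I picked, not Mamba's actual configuration:

```python
import math

def bits_to_copy(seq_len: int, vocab_size: int) -> float:
    """Information needed to reproduce an arbitrary sequence verbatim:
    log2(vocab_size ** seq_len) = seq_len * log2(vocab_size) bits."""
    return seq_len * math.log2(vocab_size)

def state_capacity_bits(d_state: int, precision_bits: int = 16) -> int:
    """A fixed latent state in R^d stored at p bits per entry holds at
    most d * p bits, independent of the input sequence length."""
    return d_state * precision_bits

# Illustrative numbers: a 16-dim state at fp16 precision holds 256 bits,
# but copying 50 tokens from a 256-token vocabulary needs 400 bits,
# so perfect verbatim copying is impossible beyond ~32 tokens here.
capacity = state_capacity_bits(16)
needed = bits_to_copy(50, 256)
print(capacity, needed)
```

Since `bits_to_copy` grows linearly in `seq_len` while `state_capacity_bits` is constant, the state (or hidden dim, per the comment above) must grow with the sequence length for the copy task, which matches the paper's claim.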