Closed · qute012 closed this issue 3 years ago
Hi,
Thanks for the question. I think there is some misunderstanding here. The sentence does not claim that LSTMs are incapable of capturing long-range dependencies; what I want to emphasize is that sequence models are not the best option when you need to capture long-range dependencies.
P.S. Although LSTMs are better than vanilla RNNs at capturing long-range dependencies, they still show a significant locality bias (Lai et al., 2015; Linzen et al., 2016). See also other discussions of the long-range dependency issue (Zhang et al., 2018; Shen et al., 2019).
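To make the locality bias concrete, here is a minimal sketch (assuming PyTorch; toy sizes and random initialization, not taken from the paper) that measures how much gradient signal an LSTM's final output sends back to each input position. The steep decay toward early timesteps is the weak long-range signal being discussed.

```python
# Minimal sketch (assumes PyTorch): how much gradient does an LSTM's
# final output send back to each input position? Toy sizes, random init.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, d_model = 100, 32

lstm = nn.LSTM(input_size=d_model, hidden_size=d_model, batch_first=True)
x = torch.randn(1, seq_len, d_model, requires_grad=True)

out, _ = lstm(x)             # out: (1, seq_len, d_model)
out[0, -1].sum().backward()  # gradient of the last timestep's output

# Gradient norm at each input position; it typically shrinks fast as the
# distance from the final timestep grows -- the locality bias in action.
grad_norms = x.grad[0].norm(dim=-1)
for t in [0, 25, 50, 75, 99]:
    print(f"input position {t:3d}: grad norm = {grad_norms[t]:.2e}")
```

By contrast, self-attention connects any two positions in a single step, so the gradient path length does not grow with distance.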
I got you! That is how I understood it before your explanation; I just wanted to be sure. I think it can be described as explicitly capturing dependencies.
Thanks for the reply.
Thank you for the great work!
However, I'm confused by one point in your paper. This paragraph says LSTMs cannot capture long-range dependencies, but I can't find that claim in Learning Deep Architectures for AI, which you cite; it only discusses RNNs, not LSTMs. Can you explain?