Closed — burcehan closed this issue 4 years ago
Hi,
Thanks for the good words!
The feature map needs to correspond to a non-negative similarity score. That means any φ with <φ(x), φ(y)> >= 0 for all x, y would be usable. Section 3.2.1 in the paper has a few more details and discussion on that matter.
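For reference, here is a minimal PyTorch sketch (illustrative only, not the library's API; the helper name `elu_feature_map` is just for this example) showing why elu(x) + 1 satisfies that condition: the map is elementwise positive, so every pairwise similarity is non-negative.

```python
import torch
import torch.nn.functional as F

def elu_feature_map(x):
    # elu(x) + 1 > 0 elementwise, so <phi(q), phi(k)> >= 0 for any q, k
    return F.elu(x) + 1

q = torch.randn(4, 8)   # 4 queries, feature dim 8 (toy sizes)
k = torch.randn(6, 8)   # 6 keys
sim = elu_feature_map(q) @ elu_feature_map(k).T  # (4, 6) similarity matrix
assert (sim >= 0).all()  # non-negative similarity scores
```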
Cheers, Angelos
I use the function elu(x)+1 in the model, and I found that convergence was very slow. Would choosing another mapping function accelerate convergence? Which mapping function is better? Thanks for your help.
I would say this is an open research problem. In some cases we observed the same (see supplementary section B), in others not. Increasing the dimensions of the query and key (or the number of layers) would make the model slower but increase its representational capacity, which might improve convergence.
For now it depends on the problem; problems that require sparse attention patterns might be harder to learn using this feature map.
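If you want to experiment with alternative maps or wider query/key dimensions, a small self-contained sketch of (non-causal) linear attention with a pluggable feature map can help. This is illustrative code under my own naming (`linear_attention`, `elu_map`, `relu_map` are hypothetical helpers, not part of the library); any non-negative map can be dropped in.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, feature_map):
    # q, k: (seq_len, dim), v: (seq_len, dim_v)
    q, k = feature_map(q), feature_map(k)        # non-negative features
    kv = k.T @ v                                 # (dim, dim_v) summary of keys/values
    z = q @ k.sum(dim=0, keepdim=True).T         # (seq_len, 1) normalizer
    return (q @ kv) / (z + 1e-6)

elu_map  = lambda x: F.elu(x) + 1                # the default feature map from the paper
relu_map = lambda x: F.relu(x) + 1e-6            # an alternative non-negative map to try

q = torch.randn(128, 64)  # increasing this dim (e.g. to 128) raises capacity
k = torch.randn(128, 64)
v = torch.randn(128, 64)
out = linear_attention(q, k, v, elu_map)         # swap in relu_map to compare convergence
```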
I am closing this issue since it is not a bug or a new feature.
Feel free to reopen it if you have more questions.
Hi, thanks for your great work! I have some questions: Why choose elu(x) + 1 as the feature map function? Is it suitable for sequences of different lengths? What conditions does the feature map function need to meet? Thanks for your help.