Open JakobEliasWagner opened 4 months ago
Feature: Heterogeneous Normalized Attention
Description
This pull request implements the Heterogeneous Normalized Attention mechanism as described in Hao et al. (2023).
The heterogeneous normalized attention block computes the attention output in the following steps:
$$\tilde{q}_i = \mathrm{Softmax}(q_i)$$
$$\tilde{k}_i = \mathrm{Softmax}(k_i)$$
$$z_t = \sum_i \frac{\tilde{q}_t \cdot \tilde{k}_i}{\sum_j \tilde{q}_t \cdot \tilde{k}_j} v_i$$
Because the softmax is applied over the feature dimension of each query and key rather than over the sequence, the sums over keys and values can be precomputed once and reused for every query, so this implementation is linear with respect to the sequence length.
In addition to the vanilla mechanism suggested by Hao et al., we added a masking mechanism.
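For illustration, a minimal PyTorch sketch of such a normalized attention step with optional masking could look as follows. The function name, tensor shapes, and mask convention are assumptions made for this example, not the actual HeterogeneousNormalizedAttention interface.

```python
import torch

def normalized_attention_sketch(q, k, v, mask=None, eps=1e-9):
    """Minimal sketch of normalized attention with optional masking.

    q, k, v: tensors of shape (batch, seq_len, dim).
    mask:    optional boolean tensor of shape (batch, seq_len); False marks
             key/value positions (e.g. padding) that should be ignored.
    """
    # Softmax over the feature dimension of each token, not over the sequence.
    q = torch.softmax(q, dim=-1)
    k = torch.softmax(k, dim=-1)

    if mask is not None:
        # Zero out masked key positions so they drop out of both sums below.
        k = k * mask.unsqueeze(-1).to(k.dtype)

    # Rearranging the sums gives linear cost in seq_len:
    #   z_t = q_t . (sum_i k_i v_i^T) / (q_t . sum_j k_j)
    kv = torch.einsum("bnd,bne->bde", k, v)            # (batch, dim, dim)
    numerator = torch.einsum("bnd,bde->bne", q, kv)    # (batch, seq_len, dim)
    denominator = torch.einsum("bnd,bd->bn", q, k.sum(dim=1)).unsqueeze(-1)

    return numerator / (denominator + eps)
```

Since the softmaxed queries and keys are strictly positive, the denominator is positive for any unmasked sequence; the eps term is only there for numerical safety.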
Which issue does this PR tackle?
How does it solve the problem?
This PR adds HeterogeneousNormalizedAttention, a linear attention implementation.
How are the changes tested?
With tests covering HeterogeneousNormalizedAttention.
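As a rough illustration of how the masking behaviour could be verified, a test along the following lines would check that masked positions cannot influence the output. It assumes the hypothetical normalized_attention_sketch function from the snippet above and is not taken from the PR's actual test suite.

```python
import torch
# Assumes normalized_attention_sketch from the sketch above is in scope.

def test_masked_positions_do_not_affect_output():
    torch.manual_seed(0)
    q = torch.randn(1, 4, 8)
    k = torch.randn(1, 4, 8)
    v = torch.randn(1, 4, 8)
    mask = torch.tensor([[True, True, True, False]])

    out_masked = normalized_attention_sketch(q, k, v, mask)

    # Perturbing the masked key/value pair must leave the output unchanged.
    k2, v2 = k.clone(), v.clone()
    k2[:, 3] = torch.randn(8)
    v2[:, 3] = torch.randn(8)
    out_perturbed = normalized_attention_sketch(q, k2, v2, mask)

    assert torch.allclose(out_masked, out_perturbed, atol=1e-6)
```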
Checklist for Contributors
- The branch follows the feature/title-slug convention.
- The PR title follows the Bugfix: Title convention.
Checklist for Reviewers: