aai-institute / continuiti

Learning function operators with neural networks.
GNU Lesser General Public License v3.0

Fix: Apply Uniform Attention Masks Explicitly #152

Open JakobEliasWagner opened 1 month ago

JakobEliasWagner commented 1 month ago

Fix: Apply Uniform Attention Masks Explicitly

Description

This PR changes the attention base class implementation to make explicit the assumption that the attention mask is applied uniformly across a single sample. This ensures consistency in which parts of the input are masked out or attended to across all queries. Without this assumption, the mask would need to be repeated for every query, leading to significant overhead. In its current state, all queries have access to all key/value pairs. While there are potential implementations utilizing a more efficient/distributed matching, this approach will not be pursued for now.
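A minimal sketch of what the uniform-mask assumption amounts to, in PyTorch-style pseudocode (the function name `masked_attention` and the tensor shapes are illustrative assumptions, not the continuiti API): a per-sample key mask of shape `(batch, num_keys)` is broadcast across the query dimension, rather than materialized as a full `(batch, num_queries, num_keys)` mask repeated for every query.

```python
import torch


def masked_attention(query, key, value, key_mask):
    """Scaled dot-product attention with a per-sample key mask.

    key_mask has shape (batch, num_keys) and is broadcast across all
    queries, i.e., every query in a sample attends to the same subset
    of key/value pairs (the uniform-mask assumption).
    """
    # Attention scores: (batch, num_queries, num_keys)
    scores = query @ key.transpose(-2, -1) / key.shape[-1] ** 0.5
    # Broadcast the (batch, num_keys) mask over the query dimension
    # instead of repeating it num_queries times.
    scores = scores.masked_fill(~key_mask.unsqueeze(1), float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ value


# Illustrative usage: mask out the last two key/value pairs per sample.
batch, num_queries, num_keys, dim = 2, 4, 6, 8
q = torch.randn(batch, num_queries, dim)
k = torch.randn(batch, num_keys, dim)
v = torch.randn(batch, num_keys, dim)
mask = torch.ones(batch, num_keys, dtype=torch.bool)
mask[:, -2:] = False
out = masked_attention(q, k, v, mask)  # (batch, num_queries, dim)
```

Because the mask is only unsqueezed and broadcast, no `(batch, num_queries, num_keys)` copy is ever allocated, which is the overhead the paragraph above refers to.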

Which issue does this PR tackle?

How does it solve the problem?

How are the changes tested?

Checklist for Contributors

Checklist for Reviewers