huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Implementing efficient self attention in T5 #10612

Open · JamesDeAntonis opened this issue 3 years ago

JamesDeAntonis commented 3 years ago

🌟 New model addition

My teammates and I (including @ice-americano) would like to use efficient self-attention methods such as Linformer, Performer, and Nyströmformer.

Model description

These new methods serve as approximations of regular attention, but reduce its complexity from quadratic to linear in the sequence length. We would like to add a parameter to T5 that lets users specify an efficient attention method to use in place of regular attention. Ideally, this would be implemented across all models, but the models tend to have varying implementations of attention, which makes that generalization fairly tedious.
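
To make the complexity claim concrete, here is a minimal, self-contained sketch of the factorization these methods exploit: replacing the softmax with a kernel feature map φ lets attention be computed as φ(Q)(φ(K)ᵀV) without ever materializing the n×n score matrix. The `elu(x) + 1` feature map follows Katharopoulos et al.'s linear transformers; Performer, Linformer, and Nyströmformer each use different approximations, so this is illustrative only, not any of those methods verbatim.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Standard attention: materializes an (n, n) score matrix -> O(n^2).
    scores = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return scores @ v

def linear_attention(q, k, v):
    # Kernelized approximation: with positive features phi(x) = elu(x) + 1,
    # attention factorizes as phi(Q) @ (phi(K)^T @ V), which never forms
    # the (n, n) matrix -> O(n * d^2), i.e. linear in sequence length n.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-2, -1) @ v                  # (d, d) summary of keys/values
    normalizer = q @ k.sum(dim=-2).unsqueeze(-1)  # (n, 1) row normalization
    return (q @ kv) / normalizer

n, d = 512, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([512, 64])
```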

Open source status

NielsRogge commented 3 years ago

There are already some PRs regarding these models: I'm working on adding the Linformer (#10587), and there's also a PR for the Performer (#9325; see further down that thread, people can already train T5 with the Performer).
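
For reference, the interface proposed in this issue might look something like the sketch below. Note that `attention_type` is not an existing `T5Config` field; it is purely hypothetical and only illustrates how a user could opt into one of these approximations.

```python
# Hypothetical interface: `attention_type` is NOT an existing T5Config
# field; it only illustrates the parameter proposed in this issue.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("t5-small")
config.attention_type = "performer"  # or e.g. "linformer", "nystromformer"
model = T5ForConditionalGeneration(config)
```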