Separius / awesome-fast-attention

list of efficient attention modules

Memory compressed attention #1

Closed · lucidrains closed this issue 4 years ago

lucidrains commented 4 years ago

I have an implementation of memory-compressed attention from the "Generating Wikipedia by Summarizing Long Sequences" paper here: https://github.com/lucidrains/memory-compressed-attention. Also, I wanted to let you know there is a more complete implementation of Linformer by Peter here: https://github.com/tatp22/linformer-pytorch. Thank you for compiling this!
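
For context, here is a minimal sketch of the idea behind memory-compressed attention: keys and values are shortened along the sequence dimension with a strided convolution before ordinary scaled dot-product attention, shrinking the attention matrix from n×n to n×(n/c). This is an illustration written for this thread under stated assumptions, not the API of the linked repository; names like `MemoryCompressedAttentionSketch` and `compression_factor` are made up, and causal masking is omitted.

```python
# Hedged sketch of memory-compressed attention (Liu et al., 2018,
# "Generating Wikipedia by Summarizing Long Sequences"). Keys and values are
# compressed along the sequence axis with a strided 1D convolution before
# standard attention. Parameter names are illustrative only.
import torch
from torch import nn

class MemoryCompressedAttentionSketch(nn.Module):
    def __init__(self, dim, heads=8, compression_factor=4):
        super().__init__()
        assert dim % heads == 0, 'dim must be divisible by heads'
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        # strided conv shortens the key/value sequence by `compression_factor`
        self.compress = nn.Conv1d(dim, dim, compression_factor, stride=compression_factor)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d, h = *x.shape, self.heads
        q = self.to_q(x)
        k, v = self.to_kv(x).chunk(2, dim=-1)

        # compress keys and values along the sequence dimension: (b, n, d) -> (b, n/c, d)
        k = self.compress(k.transpose(1, 2)).transpose(1, 2)
        v = self.compress(v.transpose(1, 2)).transpose(1, 2)

        # split heads: (b, seq, d) -> (b, h, seq, d_head)
        def split_heads(t):
            return t.reshape(b, -1, h, d // h).transpose(1, 2)
        q, k, v = map(split_heads, (q, k, v))

        # standard scaled dot-product attention over the shortened keys/values
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)

# usage: with compression_factor=4, 1024 keys/values are reduced to 256
attn = MemoryCompressedAttentionSketch(dim=512, heads=8, compression_factor=4)
x = torch.randn(1, 1024, 512)
out = attn(x)  # (1, 1024, 512)
```

The design choice that saves memory is that the query length stays at n while the key/value length drops to n/c, so the softmax is taken over a matrix that is c times smaller than full attention.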