tomaarsen / attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
https://huggingface.co/blog/tomaarsen/attention-sinks
Apache License 2.0
650 stars · 41 forks
Issue #9 (Open): Add contributing.md
opened by rajveer43, 10 months ago