Feature request
https://arxiv.org/pdf/2401.01325.pdf
Abstract
This work elicits LLMs’ inherent ability to handle long contexts without fine-tuning. The limited length of training sequences may restrict the application of Large Language Models (LLMs) to long input sequences at inference time. In this work, we argue that existing LLMs themselves have inherent capabilities for handling long contexts. Based on this argument, we suggest letting LLMs extend their own context window to fully exploit this inherent ability. We propose SelfExtend to stimulate LLMs’ long-context handling potential. The basic idea is to construct bi-level attention information: the group level and the neighbor level. Both levels are computed with the original model’s self-attention, which means the proposed method does not require any training. With only four lines of code modification, the proposed method can effortlessly extend existing LLMs’ context window without any fine-tuning. We conduct comprehensive experiments, and the results show that the proposed method can effectively extend the context window length of existing LLMs.
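For intuition, the bi-level idea in the abstract amounts to remapping the relative positions fed to the model's positional encoding (e.g. RoPE): tokens within a neighbor window keep their exact relative positions, while more distant tokens are mapped through floor division by a group size so they reuse position ids seen during pretraining. The sketch below is not the authors' implementation; it only illustrates, following the paper's description, how such a merged relative-position map could be built. The function name, and the boundary offset w_n - floor(w_n / G) used to stitch the two levels together, are my reading of the paper and may differ from the official code.

```python
import torch

def selfextend_relative_positions(seq_len: int, group_size: int, neighbor_window: int) -> torch.Tensor:
    # Query and key position indices.
    q_pos = torch.arange(seq_len).unsqueeze(1)   # shape (seq_len, 1)
    k_pos = torch.arange(seq_len).unsqueeze(0)   # shape (1, seq_len)

    # Neighbor level: the ordinary relative position, as in the unmodified model.
    rel = q_pos - k_pos

    # Group level: floor-divide both positions by the group size so distant
    # tokens share coarser position ids, then shift the result so it continues
    # where the neighbor window ends (assumed offset: w_n - floor(w_n / G)).
    grouped = (q_pos // group_size) - (k_pos // group_size)
    grouped = grouped + neighbor_window - neighbor_window // group_size

    # Keep exact positions inside the neighbor window, grouped ones outside it.
    return torch.where(rel <= neighbor_window, rel, grouped)
```

As a rough illustration (not the paper's recommended settings): for an 8k input to a 4k-pretrained model, neighbor_window=1024 and group_size=4 keep the largest mapped relative position around (8192 + 3 * 1024) / 4 ≈ 2.8k, i.e. within the position range seen during pretraining.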
Motivation
RoPE scaling alternative for longer contexts
Your contribution
Will give it a try within the next few days
Assigning the issue to myself