Update: We find that the NTK method mentioned in this Reddit post outperforms Position Interpolation up to a context size of at least 6K. Thus, we replace the implementation of PI with the NTK method.
In addition, we use an empirical formula to set $\alpha$ adaptively given the input size, so that we can avoid hyperparameter tuning and the method can be applied to different context sizes.
The following is the perplexity of Chinese-LLaMA-Plus-7B on a test set:
| Context size | 512 | 1024 | 2048 | 3072 | 4096 | 5120 | 6144 |
|---|---|---|---|---|---|---|---|
| baseline | 11.4 | 10.98 | 10.98 | 173.5 | - | - | - |
| Position Interpolation | 11.4 | 10.98 | 10.98 | 11.47 | 12.42 | 14.44 | 17.86 |
| Adaptive NTK (this PR) | 11.4 | 10.98 | 10.98 | 11.05 | 11.05 | 11.40 | 12.57 |
Even though Chinese-LLaMA-Plus-7B has been trained with an input_length of 512, its context size can be extended to 5K~6K without significantly increasing the perplexity.
Users only need to add the following lines to the beginning of their Python code:
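For reference, here is a minimal sketch of what such a patch can look like. It is an illustration based on the NTK-aware base scaling from the Reddit post, not the exact code shipped in this PR; in particular, the fixed `alpha` below is a placeholder for the adaptive empirical formula.

```python
# Illustrative sketch only: monkey-patch transformers' LlamaRotaryEmbedding
# with NTK-aware scaling of the RoPE base, as described in the Reddit post.
# This PR sets alpha adaptively from the input size via an empirical formula;
# the fixed value here is a placeholder.
import transformers
from transformers.models.llama import modeling_llama

old_init = modeling_llama.LlamaRotaryEmbedding.__init__

def ntk_scaled_init(self, dim, max_position_embeddings=2048, base=10000, device=None):
    alpha = 4.0  # placeholder: the PR derives alpha from the input length
    base = base * alpha ** (dim / (dim - 2))  # NTK-aware base scaling
    old_init(self, dim, max_position_embeddings, base, device)

modeling_llama.LlamaRotaryEmbedding.__init__ = ntk_scaled_init
```

Note that the patch has to be applied before the model is loaded with `from_pretrained`, since the rotary embedding modules are constructed at load time.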
We keep the old implementation below for others' reference.

implementation of Position Interpolation (deprecated)

Description

We implement Position Interpolation (proposed in the paper Extending Context Window of Large Language Models via Position Interpolation and in the blog) for using LLaMA with Transformers.

We find that the method can be used out of the box even without training the model with a long context size.
The following is the perplexity of Chinese-LLaMA-Plus-7B on a test set:
| Context size | 512 | 1024 | 2048 | 3072 | 4096 | 5120 |
|---|---|---|---|---|---|---|
| Perplexity | 11.4 | 11.0 | 11.0 | 11.5 | 12.4 | 15.6 |
Note that even though Chinese-LLaMA-Plus-7B has been trained with an input_length of 512, its context window size can be extended to 4096 without significantly increasing the perplexity.
Users only need to add the following lines to the beginning of their Python code:
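As a rough sketch of how such a patch works (again an illustration, not necessarily the exact deprecated code; `TARGET_LEN` is an example value, and this simplified version interpolates unconditionally rather than only when `seq_len > 2048` as described below):

```python
# Illustrative sketch only: monkey-patch transformers' LlamaRotaryEmbedding so
# that positions are compressed by scale = 2048 / TARGET_LEN, mapping
# TARGET_LEN positions into the 2048-position range the model was trained on.
import torch
import transformers
from transformers.models.llama import modeling_llama

TRAIN_LEN = 2048   # context size LLaMA was pretrained with
TARGET_LEN = 4096  # example extended context size
SCALE = TRAIN_LEN / TARGET_LEN

old_init = modeling_llama.LlamaRotaryEmbedding.__init__

def pi_init(self, dim, max_position_embeddings=2048, base=10000, device=None):
    old_init(self, dim, max_position_embeddings, base, device)
    # Rebuild the cos/sin caches with interpolated (compressed) positions.
    self.max_seq_len_cached = TARGET_LEN
    t = torch.arange(TARGET_LEN, device=self.inv_freq.device, dtype=self.inv_freq.dtype) * SCALE
    freqs = torch.einsum("i,j->ij", t, self.inv_freq)
    emb = torch.cat((freqs, freqs), dim=-1)
    self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
    self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

modeling_llama.LlamaRotaryEmbedding.__init__ = pi_init
```

As with the NTK patch, this must run before the model is instantiated.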
If `seq_len <= 2048`, the behavior is not changed; if `seq_len > 2048`, Position Interpolation is performed and the context size is extended to `seq_len`.