LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.
Apache License 2.0 · 1.45k stars · 85 forks
Has the context window truly been expanded? #24
It feels like this does not really expand the context window; rather, the model is augmented through the key-value pairs stored during training, which act as external knowledge. That means once training is complete, the memory no longer changes. It is akin to an external knowledge base for the training data's domain: if I fine-tune on financial data and then try to run inference in the technology domain, it is completely ineffective. The context window has not been expanded, dependencies across longer texts are still not captured, and the external memory becomes useless in this scenario.
I am very much looking forward to getting an answer. @CStanKonrad
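For concreteness, here is a toy sketch of the mechanism as I understand it (my own illustration, not the FoT/LongLLaMA code): queries from the current window attend over both the local keys/values and a cache of (key, value) pairs taken from text outside the window, so the "extension" depends entirely on what that cache contains.

```python
import torch
import torch.nn.functional as F

def attention_with_memory(q, k_local, v_local, k_mem, v_mem):
    """Toy single-head attention over the local context plus an external
    (key, value) memory. Illustrative only; not the repository's implementation.

    q:       (seq_len, d)  queries for the current context window
    k_local: (seq_len, d)  keys from the current context window
    v_local: (seq_len, d)  values from the current context window
    k_mem:   (mem_len, d)  keys cached from tokens outside the window
    v_mem:   (mem_len, d)  values cached from tokens outside the window
    """
    # Concatenate memory and local keys/values so every query can attend
    # both inside and outside the nominal context window.
    k = torch.cat([k_mem, k_local], dim=0)
    v = torch.cat([v_mem, v_local], dim=0)
    scores = q @ k.T / (q.shape[-1] ** 0.5)   # (seq_len, mem_len + seq_len)
    weights = F.softmax(scores, dim=-1)
    return weights @ v                         # (seq_len, d)

# Example: 8 local tokens attending over 32 cached memory entries.
d = 16
out = attention_with_memory(
    torch.randn(8, d), torch.randn(8, d), torch.randn(8, d),
    torch.randn(32, d), torch.randn(32, d),
)
print(out.shape)  # torch.Size([8, 16])
```

My question is whether this cache is populated from the actual long input at inference time, or whether it is effectively frozen after fine-tuning on the training domain.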