alonj/Same-Task-More-Tokens

The code for the paper: "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models"
https://arxiv.org/abs/2402.14848
Apache License 2.0

Same Task, More Tokens

The Impact of Input Length on the Reasoning Performance of Large Language Models

Mosh Levy[*,1], Alon Jacoby[*,1], Yoav Goldberg[1,2]

**Accepted to the ACL 2024 main conference.** Please see full details in [our preprint on arXiv](https://arxiv.org/abs/2402.14848).

What is this all about?

We explore the impact of extending input lengths on the capabilities of Large Language Models (LLMs).

Despite recent advancements in LLMs, how consistently they perform across different input lengths is not well understood.

Here, we aim to change that by isolating the effect of input length and studying when and how models fail to respond correctly on QA reasoning tasks.

How we investigate the impact of length

We investigate this by introducing a novel QA reasoning framework, our FLenQA dataset, specifically designed to assess the impact of input length. We isolate the effect of input length using multiple versions of the same sample, each extended with padding of a different length, type, and location.
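To make the setup concrete, here is a minimal sketch of the padding idea (not the paper's actual generation code; the helper and its arguments are purely illustrative): the same key facts and question are embedded in irrelevant filler text of a chosen length, at a chosen location, yielding longer variants of an otherwise identical sample.

```python
import random

def build_variant(key_facts, question, filler_sentences,
                  target_words=250, location="middle"):
    """Embed the same key facts and question in filler text ("padding")
    of roughly `target_words` words, at a chosen location.
    Illustrative only; this is not the paper's generation code."""
    padding = []
    while sum(len(s.split()) for s in padding) < target_words:
        padding.append(random.choice(filler_sentences))

    if location == "start":
        body = key_facts + padding
    elif location == "end":
        body = padding + key_facts
    else:  # "middle": scatter the key facts within the padding
        body = list(padding)
        for fact in key_facts:
            body.insert(random.randrange(len(body) + 1), fact)

    return " ".join(body) + "\n\n" + question

facts = ["Alice is older than Bob.", "Bob is older than Carol."]
question = "Is Alice older than Carol? Answer True or False."
filler = ["The sky was clear that day.", "The market opened quietly."]
short_variant = build_variant(facts, question, filler, target_words=50)
long_variant = build_variant(facts, question, filler, target_words=500)
```

Because only the padding differs between variants, a drop in accuracy from `short_variant` to `long_variant` can be attributed to input length rather than to the underlying task.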

What we found

Our findings show a notable degradation in LLMs' reasoning performance at input lengths much shorter than their technical maximum (context window). The degradation trend appears in every version of our dataset, although at different intensities.

Additionally, our study reveals that the traditional next-word-prediction metric correlates negatively with LLMs' performance on our reasoning dataset.

We also identified failure modes that can serve as useful guides for future research, potentially informing strategies to address the limitations observed in LLMs.

Analysis notebook

The notebook should help you analyse and evaluate models of your choice. We demonstrate all the necessary steps on GPT-3.5 Turbo (version 1106).

It shows how to load the dataset, prompt a model, and evaluate the responses.
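The following is a hedged sketch of that flow; the Hugging Face dataset id (`alonj/FLenQA`), the split, and the field names (`prompt`, `label`) are assumptions here, so check the notebook and the dataset card for the exact ones.

```python
from datasets import load_dataset
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed dataset id and split; see the dataset card for the real ones.
dataset = load_dataset("alonj/FLenQA", split="train")

n, correct = 20, 0  # score a small slice for illustration
for sample in dataset.select(range(n)):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[{"role": "user", "content": sample["prompt"]}],
    )
    answer = response.choices[0].message.content.strip().lower()
    # FLenQA questions have True/False answers; compare normalized strings.
    correct += answer.startswith(str(sample["label"]).lower())

print(f"Accuracy on the first {n} samples: {correct / n:.2%}")
```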

*: Authors contributed equally to this work.
1: Bar-Ilan University
2: Allen Institute for AI