Performing inference on large volumes of samples with large language models (LLMs) can be computationally and financially costly in industry and real-world use. We propose batch prompting, a simple yet effective prompting approach that enables the LLM to run inference in batches, instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inverse linearly with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly (up to 5x with six samples in batch) reduces the LLM (Codex) inference token and time costs while achieving better or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5 and GPT-4, we show the benefits of batch prompting also hold. Further analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Moreover, batch prompting can be applied across different reasoning methods using LLMs. Our code can be found at the site https://github.com/xlang-ai/batch-prompting.
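Below is a minimal sketch of the batch prompting idea described in the abstract: several test samples are packed into a single prompt after batched few-shot exemplars, one API call is made, and the single completion is split back into per-sample answers by index. This is not the authors' released code (see https://github.com/xlang-ai/batch-prompting for that); the model name, the exemplars, and the simple `Q[i]`/`A[i]` parsing scheme are illustrative assumptions.

```python
# Illustrative sketch of batch prompting, assuming the OpenAI Python client
# (openai>=1.0) and an OPENAI_API_KEY set in the environment.
import re
from openai import OpenAI

client = OpenAI()

# Few-shot exemplars, already grouped into one batched demonstration.
FEW_SHOT = """Q[1]: Natalia sold clips to 48 friends in April, and half as many in May. How many clips did she sell altogether?
Q[2]: Weng earns $12 an hour for babysitting. Yesterday she babysat for 50 minutes. How much did she earn?
A[1]: In May she sold 48 / 2 = 24 clips, so 48 + 24 = 72 clips. The answer is 72.
A[2]: 50 minutes is 50 / 60 of an hour, so she earned 12 * 50 / 60 = 10 dollars. The answer is 10.
"""

def batch_prompt(questions: list[str], model: str = "gpt-3.5-turbo") -> list[str]:
    """Answer a batch of questions with a single LLM call."""
    # Concatenate the batch of test questions after the batched exemplars.
    batched = "\n".join(f"Q[{i + 1}]: {q}" for i, q in enumerate(questions))
    prompt = f"{FEW_SHOT}\n{batched}\nAnswer each question in the same indexed format."

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response.choices[0].message.content

    # Split the single completion back into per-sample answers by index.
    answers = [""] * len(questions)
    for match in re.finditer(r"A\[(\d+)\]:\s*(.*)", text):
        idx = int(match.group(1)) - 1
        if 0 <= idx < len(questions):
            answers[idx] = match.group(2).strip()
    return answers


if __name__ == "__main__":
    qs = [
        "A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total?",
        "James writes a 3-page letter to 2 different friends twice a week. How many pages does he write a year?",
    ]
    for q, a in zip(qs, batch_prompt(qs)):
        print(q, "->", a)
```

Because the few-shot exemplars are shared by all samples in the batch, the per-sample token cost shrinks roughly inverse linearly with the batch size, which is the cost saving the abstract refers to.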