We explore how iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning and generation ability in long-horizon generation tasks, while greatly mitigating hallucination. In particular, the proposed method, retrieval-augmented thoughts (RAT), revises each thought step one by one with information retrieved for the task query together with the current and past thought steps, after the initial zero-shot CoT is generated. Applying RAT to GPT-3.5, GPT-4, and CodeLLaMA-7b substantially improves their performance on various long-horizon generation tasks, relatively increasing rating scores by an average of 13.63% on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning. The demo page can be found at https://craftjarvis.github.io/RAT
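The abstract describes an iterative loop: generate a zero-shot CoT, then revise each step using information retrieved for the task query plus the steps revised so far. Below is a minimal sketch of that loop, assuming caller-supplied callables for the LLM and the retriever; the function names and signatures are illustrative and are not the paper's released API.

```python
from typing import Callable, List

def rat(
    task_query: str,
    generate_cot: Callable[[str], List[str]],        # LLM: query -> zero-shot CoT steps (assumed helper)
    retrieve: Callable[[str], str],                   # retriever: query -> relevant passages (assumed helper)
    revise_step: Callable[[str, List[str], str, str], str],  # LLM: rewrite one step given evidence (assumed helper)
) -> List[str]:
    """Revise a zero-shot chain of thought step by step using retrieval."""
    steps = generate_cot(task_query)                  # 1) initial zero-shot CoT
    revised: List[str] = []
    for step in steps:
        # 2) Query the retriever with the task, the already-revised steps,
        #    and the current draft step.
        passages = retrieve("\n".join([task_query, *revised, step]))
        # 3) Ask the LLM to rewrite the current step grounded in the retrieved passages.
        revised.append(revise_step(task_query, revised, step, passages))
    return revised
```

In use, `generate_cot` and `revise_step` would wrap prompts to a model such as GPT-4, and `retrieve` would wrap a vector or web search; the key design choice reflected here is that retrieval is conditioned on the evolving (revised) chain rather than on the task query alone.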