Reasoning presents a significant and challenging issue for Large Language Models (LLMs). The predominant focus of research has revolved around developing diverse prompting strategies to guide and structure the reasoning processes of LLMs. However, these approaches, based on decoder-only causal language models, often process the input question in a single forward pass, potentially missing the rich, back-and-forth interactions inherent in human reasoning. Scant attention has been paid to a critical dimension, i.e., the input question itself embedded within the prompts. In response, we introduce a deceptively simple yet highly effective prompting strategy, termed question "re-reading". Drawing inspiration from human learning and problem-solving, re-reading entails revisiting the question information embedded within input prompts. This approach aligns seamlessly with the cognitive principle of reinforcement, enabling LLMs to extract deeper insights, identify intricate patterns, establish more nuanced connections, and ultimately enhance their reasoning capabilities across various tasks. Experiments conducted on a series of reasoning benchmarks underscore the effectiveness and generality of our method. Moreover, our findings demonstrate that our approach integrates readily with various language models, thought-eliciting prompting methods, and ensemble techniques, further underscoring its versatility and compatibility in the realm of LLMs.
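As a concrete illustration, the sketch below shows one way a question "re-reading" prompt could be assembled in Python. The specific cue wording ("Read the question again:") and the chain-of-thought trigger are assumptions for illustration, not necessarily the authors' exact templates.

```python
def build_re2_prompt(question: str) -> str:
    """Minimal sketch of a re-reading (RE2-style) prompt.

    The re-reading cue and the thought-eliciting trigger below are
    placeholder wordings assumed for illustration.
    """
    return (
        f"Q: {question}\n"
        # Repeat the question so the model attends to it a second time.
        f"Read the question again: {question}\n"
        # Optional thought-eliciting trigger, composable with other prompting methods.
        "A: Let's think step by step."
    )


# Example usage:
print(build_re2_prompt(
    "If a train travels 60 km in 1.5 hours, what is its average speed?"
))
```

Because the strategy only rewrites the input prompt, it can be layered on top of existing models, thought-eliciting prompts, and ensemble schemes without any change to decoding.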