irthomasthomas / undecidability


[2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models #900

Open ShellLM opened 3 weeks ago

ShellLM commented 3 weeks ago

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Snippet

"We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier."
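The method described in the snippet can be sketched in a few lines: each few-shot exemplar pairs a question with a worked-out reasoning trace before the final answer, and the exemplars are concatenated ahead of the new question. The helper below is an illustrative sketch, not the paper's actual prompt-building code; the tennis-ball exemplar echoes the style of the paper's Figure 1.

```python
# Sketch of chain-of-thought prompting: few-shot exemplars whose answers
# include intermediate reasoning steps, followed by the target question.
# Exemplar content and function names are illustrative assumptions.

COT_EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 more cans of tennis "
            "balls. Each can has 3 tennis balls. How many tennis balls "
            "does he have now?"
        ),
        "reasoning": (
            "Roger started with 5 balls. 2 cans of 3 tennis balls each "
            "is 6 tennis balls. 5 + 6 = 11."
        ),
        "answer": "11",
    },
]

def build_cot_prompt(exemplars, question):
    """Concatenate worked exemplars, then append the new question.

    The model is expected to continue after the final "A:" with its own
    chain of thought before stating an answer.
    """
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

prompt = build_cot_prompt(
    COT_EXEMPLARS,
    "A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?",
)
```

The resulting string would be sent to the model as-is; the paper reports that with roughly eight such exemplars, a 540B-parameter model reaches state-of-the-art GSM8K accuracy.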

Full Text

[2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)

Cite as: arXiv:2201.11903 [cs.CL] (or arXiv:2201.11903v6 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2201.11903

Suggested labels

None

ShellLM commented 3 weeks ago

Related content

- #657 (similarity score: 0.86)
- #823 (similarity score: 0.85)
- #815 (similarity score: 0.84)
- #238 (similarity score: 0.84)
- #546 (similarity score: 0.83)
- #684 (similarity score: 0.83)