Chain-of-Thought Prompting: Helping LLMs Learn by Example
Chain-of-Thought (CoT) prompting is a technique that encourages large language models (LLMs) to break complex problems into intermediate reasoning steps by including a few worked demonstrations in the prompt. The approach has been shown to improve LLM performance on arithmetic, commonsense, and symbolic reasoning tasks, categories that, unlike many others, resist improvement from model scaling alone. Because each demonstration models how to decompose a larger problem into a sequence of smaller steps, CoT prompting lets LLMs work through complicated math and logic questions they would otherwise answer incorrectly. The method has since inspired more capable extensions such as "Tree-of-Thought" and "Graph-of-Thought" prompting.
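The core mechanic is easy to see in code. Below is a minimal sketch of a few-shot CoT prompt; the tennis-ball demonstration is the well-known example from Wei et al.'s original CoT paper, and `query_model` is a hypothetical placeholder for whatever LLM client you use, not a real API.

```python
# Minimal sketch of few-shot Chain-of-Thought prompting.
# The demonstration includes intermediate reasoning steps, which primes
# the model to "show its work" on the final, unanswered question.

COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and
bought 6 more, how many apples do they have?
A:"""


def query_model(prompt: str) -> str:
    """Hypothetical stand-in; swap in your provider's completion call."""
    raise NotImplementedError("Replace with a real LLM API call.")


if __name__ == "__main__":
    # With the CoT demonstration in place, the model tends to respond
    # with step-by-step reasoning, e.g.:
    # "They had 23 apples. They used 20, leaving 3. They bought 6 more,
    #  so 3 + 6 = 9. The answer is 9."
    print(query_model(COT_PROMPT))
```

Contrast this with a standard few-shot prompt, where the demonstration answer would read only "The answer is 11."; the intermediate steps in the exemplar are what elicit the model's own decomposition.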
Company: Deepgram
Date published: Oct. 2, 2023
Author(s): Brad Nikkel
Word count: 2,615
Language: English