Closed by ijyliu 9 months ago
although gains in task performance are diminishing, chain-of-thought prompting (and the zero-shot varieties in particular) is still a generally cheap (in time and resources) and effective way to get a meaningful performance improvement out of large language models
few-shot prompting has its own applications and advantages, particularly in getting LLM responses to conform to desired preferences and formatting
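as a minimal sketch of the two prompting styles mentioned above (the task and example Q/A pairs here are hypothetical, and in practice the prompt would be sent to an LLM API):

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot chain-of-thought: append a reasoning trigger to the bare question."""
    return f"{question}\nLet's think step by step."

def few_shot(examples: list[tuple[str, str]], question: str) -> str:
    """Few-shot: prepend worked Q/A pairs so the model mimics their format."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

# hypothetical usage
cot_prompt = zero_shot_cot("What is 17 * 23?")
fs_prompt = few_shot([("What is 2 + 2?", "4")], "What is 3 + 5?")
```

the zero-shot variant costs nothing beyond the trigger phrase, while the few-shot variant spends tokens on examples to buy formatting control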
first clean up the analysis, then clean up the conclusion, then add the findings to the abstract