hoangquochung1110 / public-notes


Maximize LLM performance #19

Open hoangquochung1110 opened 2 months ago

hoangquochung1110 commented 2 months ago
1. Always start with Prompt Engineering
hoangquochung1110 commented 2 months ago

Single-turn Prompting

Multi-turn (conversational) Prompting
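A minimal sketch of the difference between the two, using the common role/content chat-message format (the layout follows the widely used OpenAI-style message schema; adapt it to your client library — the function names here are just for illustration):

```python
def single_turn(question: str) -> list[dict]:
    """Single-turn prompting: all context must fit into one request."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]


def multi_turn(history: list[dict], question: str) -> list[dict]:
    """Multi-turn prompting: prior turns are replayed with each request
    so the model keeps conversational context."""
    return history + [{"role": "user", "content": question}]


# Build a conversation: first turn, model reply, then a follow-up
# question that only makes sense given the earlier turns.
history = single_turn("What is the capital of France?")
history.append({"role": "assistant", "content": "Paris."})
followup = multi_turn(history, "What is its population?")
```

With multi-turn prompting the follow-up ("its population") is resolvable because the earlier turns travel with the request; a single-turn prompt would have to restate that context explicitly.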

hoangquochung1110 commented 2 months ago

Chain of Thought (CoT) Prompting
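CoT prompting asks the model to write out intermediate reasoning steps before the final answer. Two common variants are sketched below — zero-shot CoT (append a reasoning trigger phrase) and few-shot CoT (show a worked example); the exact wording is a convention, not an API:

```python
def cot_prompt(question: str) -> str:
    # Zero-shot CoT: an explicit trigger phrase nudges the model to
    # emit step-by-step reasoning before answering.
    return f"{question}\n\nLet's think step by step."


# Few-shot CoT: a demonstration whose answer shows the reasoning,
# so the model imitates the pattern for the real question.
FEW_SHOT_COT = (
    "Q: A bag has 3 red and 2 blue balls. How many balls in total?\n"
    "A: There are 3 red balls and 2 blue balls. 3 + 2 = 5. The answer is 5.\n"
    "Q: {question}\n"
    "A:"
)

prompt = cot_prompt("If a train travels 60 km/h for 2.5 hours, how far does it go?")
```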

hoangquochung1110 commented 2 months ago

Structure your Output

When working with multiple models, you cannot control the structure of each model's raw output -> define a universal structured output (e.g. a shared schema) that every model's response must conform to.
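One minimal way to enforce this, assuming the models are asked to reply in JSON: validate every raw response against a single hand-written schema before it enters the rest of the system (the field names `answer` and `confidence` here are hypothetical):

```python
import json

# One shared schema that every model's output must satisfy,
# regardless of which provider produced the response.
SCHEMA = {"answer": str, "confidence": float}


def parse_response(raw: str) -> dict:
    """Parse a model's raw text as JSON and enforce the shared schema."""
    data = json.loads(raw)
    for field, expected_type in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field} must be {expected_type.__name__}")
    return data


ok = parse_response('{"answer": "Paris", "confidence": 0.97}')
```

In practice a library such as Pydantic or a JSON Schema validator does the same job with richer error reporting; the point is that the schema lives in your code, not in any one model.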