Aidenzich / road-to-master

A repo to store our research footprint on AI
MIT License

Fine-tuning vs In-context learning vs RAG #55

Open Aidenzich opened 1 month ago

Aidenzich commented 1 month ago
| Approach | Use When | Don't Use When | Examples |
| --- | --- | --- | --- |
| Fine-Tuning | - You have labeled data for the task<br>- High and consistent performance is needed<br>- Computational resources for training are available<br>- You want efficient inference | - Limited data for the task<br>- Flexibility to perform various tasks is needed<br>- Rapid prototyping/experimentation is required<br>- Computational resources are limited | Simulating a person's unique tone and speaking style |
| In-Context Learning | - Limited labeled data for the task<br>- Flexibility to perform various tasks is needed<br>- Rapid prototyping/experimentation is required<br>- Computational resources are limited | - High and consistent performance is needed<br>- Computational resources for training are available | Answering general queries on various topics |
| RAG (Retrieval-Augmented Generation) | - Up-to-date or domain-specific knowledge is needed<br>- Referencing authoritative/proprietary sources is required<br>- Computational resources for fine-tuning are limited<br>- Flexibility to rapidly update knowledge is needed<br>- Mitigating hallucinations/fabrications is important | - Sufficient data and resources for fine-tuning are available<br>- Fundamentally different behavior/language needs to be learned<br>- External knowledge is not easily separable | Generating responses based on the latest research papers |
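To make the RAG row concrete, here is a minimal, self-contained sketch of the retrieve-then-prompt loop. It is purely illustrative: real systems use a learned embedding model and a vector database, whereas this toy version scores documents with a bag-of-words cosine similarity (the `embed`, `retrieve`, and `build_prompt` names are hypothetical, not from any library).

```python
# Toy RAG sketch: retrieve the most relevant document with a
# bag-of-words scorer, then prepend it to the model prompt.
# Assumption: a real pipeline would replace `embed` with a learned
# embedding model and `retrieve` with a vector-database lookup.
from collections import Counter
import math


def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector over whitespace tokens."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the answer in retrieved context instead of model weights."""
    context = "\n".join(retrieve(query, docs, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "RAG augments a language model with retrieved documents.",
    "Fine-tuning updates model weights on labeled task data.",
    "In-context learning conditions the model on few-shot examples.",
]
print(build_prompt("How does RAG work?", docs))
```

Note the design trade-off the table describes: updating knowledge here only means editing `docs`, with no retraining, which is why RAG suits rapidly changing or proprietary corpora.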