rajatsharma369007 / peft-fine-tuning-genAI

Lightweight fine-tuning is one of the most important techniques for adapting foundation models, because it allows you to tailor a foundation model to your needs without requiring substantial computational resources.

Feedback: training is not required for the foundational model #2

Closed: rajatsharma369007 closed this issue 3 months ago

rajatsharma369007 commented 3 months ago

My approach of training a pretrained model would be suitable for a conventional model, e.g. a CNN in a classification task, transfer learning, etc. However, foundation models have exponentially more parameters, and those parameters are trained on millions of data points, some on nearly all text available on the Internet. Retraining those parameters just because you have, say, 1,000 labeled sentences in a training dataset is not reasonable, and you run the risk of catastrophic forgetting, as mentioned.
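For context, here is a minimal sketch of how a parameter-efficient approach avoids retraining the full model, assuming Hugging Face's `peft` library with a LoRA adapter; the base checkpoint, task, and hyperparameters below are illustrative assumptions, not the repo's actual configuration:

```python
# Sketch: LoRA-based parameter-efficient fine-tuning.
# The pretrained weights stay frozen; only small low-rank adapter
# matrices are trained, which limits catastrophic forgetting and compute.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",  # assumed base checkpoint
    num_labels=2,
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,          # sequence classification
    r=8,                                 # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],   # DistilBERT attention projections
)

# Wrap the frozen base model; only the adapter weights require gradients,
# so the original parameters (and what they already learned) stay intact.
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # typically around 1% of all parameters
```

Only the small adapter weights need to be saved and shipped after training, while the original foundation model remains untouched.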

rajatsharma369007 commented 3 months ago

The issue is resolved in this commit.