After semi-supervised pretraining, can we do lightweight fine-tuning or few-shot learning instead of full classification fine-tuning?
What is the expected behavior?
Instead of fine-tuning on a decent amount of labeled data, is it possible to do some lightweight fine-tuning (e.g., fine-tuning on fewer than 100 labeled examples) or to do few-shot learning instead of standard classification?
What is the motivation or use case for adding/changing the behavior?
We only have limited labeled data available to fine-tune the model.
How should this be implemented in your opinion?
For few-shot learning, one option might be to change the loss function (e.g., to a metric-learning objective computed over few-shot episodes).
Feature request
Are you willing to work on this yourself? Yes.