Open turian opened 1 year ago
@SeanNaren or @rohitgr7, mind having a look? :otter:
Hi @turian, do you mean "Can you demonstrate how to fine-tune a pre-trained model on new data?" Because I think unlabeled data means the data used for pre-training (i.e. without labels, performing the MLM task or something related).
Hi @Borda, I think I would like to contribute an example/documentation (if it doesn't already exist) for fine-tuning a model using Lightning Transformers.
That would be great!
Thanks for the heads up. Hi @turian, can you give me a concrete instance of where you would like to see an example, or the domain you are talking about? The usage of the library on different tasks is already shown in the README.
@turian @uakarsh how are you doing? :otter:
Hi @Borda, I am doing well. I was not able to make much progress, since I am not sure exactly what needs to be solved. As far as fine-tuning is concerned, I believe there are already examples in the README and docs of Lightning Transformers.
🚀 Feature
Documentation or example showing how to fine-tune a pretrained model on unlabeled data.
Motivation
It's useful to fine-tune your pretrained model on unlabeled data, so that, if you have precious few labels in the target domain, you have still adapted to that domain using the unlabeled text.
Pitch
We have these super huge foundation models, but for niche domains without large labeled corpora it's great to fine-tune. Examples:
Alternatives
Hack around, maybe use Hugging Face directly. IDK?
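What's being requested is essentially domain-adaptive pre-training with a masked-language-modeling objective on the unlabeled in-domain text. As a minimal, hedged sketch of that objective in plain PyTorch: the tiny model, random token ids, and `MASK_TOKEN_ID` below are stand-ins I made up for illustration; a real example would load a pretrained checkpoint and tokenizer (e.g. via Hugging Face's `AutoModelForMaskedLM` and `DataCollatorForLanguageModeling`) instead.

```python
import torch
import torch.nn as nn

# Stand-in hyperparameters; a real run would take these from the checkpoint's config.
VOCAB_SIZE, HIDDEN, SEQ_LEN, BATCH = 1000, 64, 32, 8
MASK_TOKEN_ID = 0   # hypothetical [MASK] token id
MLM_PROB = 0.15     # BERT-style masking probability

class TinyMLM(nn.Module):
    """Toy encoder + LM head, standing in for a pretrained transformer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.encoder = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.lm_head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, ids):
        return self.lm_head(self.encoder(self.embed(ids)))

def mask_tokens(ids):
    """Pick ~15% of positions, replace them with [MASK], and build labels
    that are -100 (ignored by the loss) everywhere else."""
    labels = ids.clone()
    mask = torch.rand(ids.shape) < MLM_PROB
    labels[~mask] = -100            # compute loss only on masked positions
    masked_ids = ids.clone()
    masked_ids[mask] = MASK_TOKEN_ID
    return masked_ids, labels

torch.manual_seed(0)
model = TinyMLM()
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Unlabeled in-domain "corpus": random token ids as a placeholder for real tokenized text.
ids = torch.randint(1, VOCAB_SIZE, (BATCH, SEQ_LEN))
masked_ids, labels = mask_tokens(ids)

logits = model(masked_ids)
loss = nn.functional.cross_entropy(
    logits.view(-1, VOCAB_SIZE), labels.view(-1), ignore_index=-100
)
loss.backward()
opt.step()
print(f"MLM loss: {loss.item():.3f}")
```

In the example this issue asks for, `ids` would come from tokenizing the niche-domain text, and `TinyMLM` would be replaced by the pretrained checkpoint being adapted; the masking/loss loop is the part that needs no labels.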