pointnetwork / point-alpaca


Question about fine-tuning LLaMa #2

Open LeoArtaza opened 1 year ago

LeoArtaza commented 1 year ago

I was wondering: could we fine-tune LLaMA with our own training data first and then apply this process to turn it into Alpaca, and would that still work? Or would it be better to fine-tune Alpaca directly? Is it possible at all?

sergevar commented 1 year ago

The dataset that the Stanford team generated has 52K entries: https://github.com/tatsu-lab/stanford_alpaca
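For reference, each entry in that dataset is a small JSON record with `instruction`, `input`, and `output` fields, roughly like this (illustrative example, not a verbatim record):

```json
{
  "instruction": "Give three tips for staying healthy.",
  "input": "",
  "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep."
}
```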

Alpaca gives you much better performance than raw LLaMA, so unless you have a very good dataset, it makes more sense to further fine-tune Alpaca on your data.

Meaning: if you only have a few JSON files, it definitely doesn't make sense to tune LLaMA on them; the result will probably be worse than the base model.
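If you do go the route of further tuning Alpaca on your own data, a minimal sketch with Hugging Face `transformers` might look like the following. The checkpoint path, prompt template, and hyperparameters here are illustrative assumptions, not anything shipped with this repo:

```python
# Minimal sketch: continue fine-tuning an Alpaca-style checkpoint on your own
# instruction data. Paths, template, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL_PATH = "./point-alpaca-7b"  # hypothetical local path to the merged weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

def format_example(example):
    # Same instruction/input/output layout as the Stanford Alpaca data
    # (simplified: the original template drops the Input block when empty).
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Input:\n{example['input']}\n\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

# "my_data.json" is your own file in the same instruction/input/output format.
dataset = load_dataset("json", data_files="my_data.json")["train"]
dataset = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./alpaca-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        fp16=True,
    ),
    train_dataset=dataset,
    # Causal LM objective: labels are the input ids themselves.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice you'd likely want parameter-efficient tuning (e.g. LoRA) rather than a full fine-tune of a 7B model, but the data-formatting step is the same either way.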