posts/finetune_redpajama/ #3

utterances-bot commented 1 year ago

Finetuning Redpajama (OpenLlama)

https://www.storminthecastle.com/posts/finetune_redpajama/

raziurtotha commented 1 year ago

Really glad I came across your articles, John! I'm just beginning my journey with open LLMs. I have two questions: how could I use the finetuned model (after I train it on my own data) to make an interactive chatbot? And is there any evaluation method?

johnrobinsn commented 1 year ago

Glad you enjoyed it. As you can see from the notebook, the model has been finetuned to take in a prompt (or instruction) and respond, so you can leverage that to have a back-and-forth dialogue, as in the sketch below. If you want to train the model to have a more chat-like dialog, where prompts and responses can refer further back into the context, you could look at the Open Assistant dataset and finetune your model on an augmented version of it.
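Here's a rough sketch of what that interactive loop could look like, assuming the Alpaca-style instruction/response prompt template from the notebook; the adapter path below (`./lora-redpajama`) is just a placeholder for wherever you saved your LoRA weights:

```python
# Minimal interactive chat loop around the finetuned model (sketch).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "./lora-redpajama")  # placeholder adapter path
model.eval()

# Alpaca-style template; match whatever template you finetuned with.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

while True:
    instruction = input("You: ")
    inputs = tokenizer(PROMPT.format(instruction=instruction), return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the echoed prompt.
    reply = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print("Bot:", reply.strip())
```

Note this treats each turn independently; for real multi-turn memory you'd append prior turns to the prompt, which is where the chat-style finetuning mentioned above helps.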

raziurtotha commented 1 year ago

@johnrobinsn, thanks for your reply, John. Appreciate it.

hamelsmu commented 1 year ago

I just got a chance to read this, and I really love this article. It was very helpful.

pcuenca commented 1 year ago

Fantastic job, John, very clear and informative. Thanks a lot!

Sakil786 commented 12 months ago

@johnrobinsn, thanks for writing such a great site; I really enjoy it. Is it possible to finetune openlm-research/open_llama_3b in a similar way? Or could we treat togethercomputer/RedPajama-INCITE-Base-3B-v1 as equivalent to openlm-research/open_llama_3b? Thank you.

johnrobinsn commented 11 months ago

Yes, training should be very similar for openlm-research/open_llama_3b. They are two different models trained by different teams, but both were initially trained on the RedPajama dataset, and both should work well. One practical difference is the architecture: RedPajama-INCITE is a GPT-NeoX-style model while open_llama follows the LLaMA architecture, so if you're using LoRA you'll need to point it at different module names, as in the sketch below.
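A rough sketch of the swap, assuming the same LoRA setup as the notebook (the hyperparameters here are illustrative, not the notebook's exact values):

```python
# Sketch: swapping open_llama_3b in as the base model for LoRA finetuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_3b"
# The open_llama model card recommends the slow (sentencepiece) tokenizer.
tokenizer = AutoTokenizer.from_pretrained(base, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    # LLaMA-style attention projections; a GPT-NeoX model like
    # RedPajama-INCITE would use ["query_key_value"] instead.
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```

From here the training loop, dataset, and prompt template can stay the same as in the notebook.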