jzhang38 / TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Apache License 2.0

How to train model with databricks-dolly-15k.jsonl dataset format. #13

Closed TapendraBaduwal closed 1 year ago

TapendraBaduwal commented 1 year ago

How to train model with databricks-dolly-15k.jsonl dataset format.

Can we fine-tune using BitsandBytes and SFT?

VatsaDev commented 1 year ago

@TapendraBaduwal you should probably wait until the model is fully trained and then ask about this, but SFT was mentioned in one of the closed issues.

jzhang38 commented 1 year ago

Our model can largely be plugged into repos that support Llama 2 (including BitsandBytes and SFT repos like FastChat). For your case, you need to find a training script that supports the databricks-dolly-15k.jsonl dataset format and change the model name to our released checkpoint. Just make sure you have the latest version of HF transformers to support MQA. We are working on fine-tuning our model as well and will be releasing something, probably next week.
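
As a minimal sketch of that workflow (not an official TinyLlama script): load the dolly JSONL, fold each record into a single prompt, and train with plain HF Transformers. The checkpoint name and hyperparameters below are placeholders; substitute whichever intermediate checkpoint has actually been released.

```python
# Hedged sketch: fine-tune a released TinyLlama checkpoint on
# databricks-dolly-15k.jsonl with plain HF Transformers.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# databricks-dolly-15k records carry "instruction", "context", and "response" fields.
raw = load_dataset("json", data_files="databricks-dolly-15k.jsonl", split="train")

def to_features(example):
    # Fold each record into one instruction-following prompt and tokenize it.
    context = f"\n\nInput:\n{example['context']}" if example["context"] else ""
    text = (
        f"### Instruction:\n{example['instruction']}{context}\n\n"
        f"### Response:\n{example['response']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = raw.map(to_features, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinyllama-dolly",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```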

TapendraBaduwal commented 1 year ago

@jzhang38 For fine-tuning I am using Parameter-Efficient Fine-Tuning (PEFT). PEFT supports the QLoRA method, which fine-tunes a small fraction of the LLM parameters with 4-bit quantization, and the adapter weights can then be merged back into the base model. Is this the right way to fine-tune this tiny model?
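
For reference, a hedged sketch of that QLoRA setup (PEFT + bitsandbytes), not an officially endorsed recipe; the model name is again a placeholder checkpoint:

```python
# QLoRA sketch: 4-bit base model + LoRA adapters via PEFT and bitsandbytes.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T"  # placeholder checkpoint

# 4-bit NF4 quantization of the frozen base weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 1.1B parameters

# ... train with the Trainer/SFT loop of your choice, then merge the adapter:
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
# merged = PeftModel.from_pretrained(base, "path/to/adapter").merge_and_unload()
```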