@TapendraBaduwal you should probably wait until the model is fully trained and then ask for this, but SFT was mentioned in one of the closed issues:
Our model is largely plug-and-play in repos that support Llama 2 (including BitsandBytes and SFT repos like FastChat). For your case, you need to find a training script that supports the databricks-dolly-15k.jsonl dataset format and change the model name to our released checkpoint. Just make sure you have the latest version of HF transformers to support MQA. We are working on fine-tuning our model as well and will be releasing something, probably next week.
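Concretely, that workflow could look something like the sketch below. The checkpoint name is a placeholder for the released checkpoint, and trl's `SFTTrainer` is just one example of an SFT script that can consume this dataset format:

```python
# Minimal SFT sketch (not an official script). model_name is a placeholder
# for the released checkpoint; substitute the actual HF repo id.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model_name = "PATH/TO/RELEASED-CHECKPOINT"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# databricks-dolly-15k rows have instruction / context / response fields;
# flatten each row into a single training text.
def to_text(example):
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example["context"]:
        prompt += f"### Context:\n{example['context']}\n\n"
    prompt += f"### Response:\n{example['response']}"
    return {"text": prompt}

dataset = load_dataset("databricks/databricks-dolly-15k", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()
```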
@jzhang38 For fine-tuning I am using Parameter-Efficient Fine-Tuning (PEFT). PEFT supports the QLoRA method, which fine-tunes a small fraction of the LLM parameters with 4-bit quantization and then merges the adapter weights. Is this the right way to fine-tune this tiny model?
How can I train the model with the databricks-dolly-15k.jsonl dataset format?
Can we fine-tune using BitsandBytes and SFT?
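This is roughly the QLoRA recipe I have in mind (a sketch; the checkpoint name and hyperparameter values are placeholders, not recommendations):

```python
# Rough QLoRA sketch: 4-bit base model via bitsandbytes, LoRA adapters via
# PEFT, adapters merged after training. Checkpoint name and hyperparameters
# are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "PATH/TO/RELEASED-CHECKPOINT"  # placeholder

# 4-bit NF4 quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections: only a small fraction of the
# parameters become trainable.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ... run training here, e.g. with transformers.Trainer or trl's SFTTrainer ...

# After training, the adapter weights can be merged back into the base model
# (usually after reloading the base model in fp16/bf16, since merging directly
# into 4-bit weights is not supported).
# merged_model = model.merge_and_unload()
```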