Open LeonardoSanBenitez opened 1 month ago
If I understood correctly, the model TinyAgent-1.1B was obtained by finetuning the TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the TinyAgent-dataset, correct?
This repo seems to only contain the code to run the resulting finetuned model; it includes neither the code to perform the LoRA finetuning nor the code to generate the synthetic data.
Would it be possible to provide them?
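For anyone else trying to reproduce this before the authors release their code, a rough sketch of what the LoRA finetuning might look like with Hugging Face `transformers` + `peft` is below. To be clear, this is NOT the authors' actual training code: the dataset path, the rank/alpha, the target modules, and all hyperparameters are my guesses.

```python
# Hypothetical reproduction sketch -- not the authors' actual training code.
# Assumes the TinyAgent dataset has been exported to a local JSONL file with
# a "text" field; the file name and all hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Typical LoRA targets for Llama-style attention; the real config is unknown.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Tokenize the (assumed) exported dataset for causal-LM training.
dataset = load_dataset("json", data_files="tinyagent_dataset.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinyagent-1.1b-lora",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tinyagent-1.1b-lora")
```

This only covers the finetuning half; the synthetic data generation pipeline is the part that really needs the authors' code.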
I would also like to see this released. I'm especially curious about how the synthetic data was generated, as I believe that's the key part of most of these efforts.