Hi, thanks for your work!
You mentioned that "we used better strategies to train Phi-3-Mini-based and Llama-3-8B-based Bunny", so I would like to ask: what strategy did you use to train the Llama-3-8B-based Bunny? And when do you plan to release the finetune_lora.sh for the Llama-3-8B-based Bunny?