Closed HuBocheng closed 4 months ago
I have figured out how to perform the testing. I just need to use script/merge_lora_weights.py to merge the LoRA weights back into the base model. It seems I missed that information in evaluate.md. I apologize for wasting your time. 🫣
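For anyone landing here later: merging a LoRA adapter just folds the low-rank update back into the frozen base weights, roughly W' = W + (alpha / r) · B · A per adapted layer. The pure-Python sketch below only illustrates that arithmetic on tiny matrices; it is not the actual Bunny merge script, which applies the same idea across every adapted layer of the checkpoint.

```python
# Illustrative sketch of the LoRA merge arithmetic: W' = W + (alpha / r) * B @ A.
# Tiny matrices, pure Python -- the real merge is done by script/merge_lora_weights.py.

def matmul(a, b):
    """Multiply two matrices represented as lists of rows."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(w, lora_a, lora_b, alpha, r):
    """Fold the scaled low-rank update (alpha / r) * B @ A into the base weight W."""
    scale = alpha / r
    delta = matmul(lora_b, lora_a)  # (out_dim, in_dim) update
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# 2x2 base weight, rank-1 adapter (B: 2x1, A: 1x2), alpha = 2
w = [[1.0, 0.0], [0.0, 1.0]]
lora_b = [[1.0], [2.0]]
lora_a = [[0.5, 0.5]]
merged = merge_lora(w, lora_a, lora_b, alpha=2.0, r=1)
print(merged)  # -> [[2.0, 1.0], [2.0, 3.0]]
```

After merging, the layer no longer needs the adapter at inference time, which is why the merged checkpoint can be saved and evaluated like an ordinary full model.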
I have modified LazySupervisedDataset and fine-tuned the model on a new dataset. The final training output looks normal:
Subsequently, I obtained the checkpoint data at Bunny/checkpoints-phi-2/bunny-lora-phi-2:
I would like to run some benchmark tests on Bunny with the fine-tuned model, using scripts such as script/eval/full/mmbench.sh and variants of bunny/eval/model_vqa_mmbench.py. However, the checkpoint directory I obtained contains only adapter_model.safetensors, which does not include the full model parameters.
Could you please advise how to save the entire model after training, similar to the BAAI/Bunny-v1_0-3B release on Hugging Face, so that I can evaluate the fine-tuned model's performance on various benchmarks?
Thank you very much for your assistance!