Great work on AndroidLab! I'm interested in reproducing the finetuning results.
Could you please share the code and the subset of the Android Instruction dataset (726 traces, 6208 steps) used for finetuning on the benchmark tasks?
This would be very helpful for the community.

XinrunXu replied 1 week ago:
Thank you very much for your interest in our work!
Regarding the finetuning code: for the LLMs (Qwen2, ChatGLM4, and Llama 3.1) and for Qwen2-VL, we performed full-parameter finetuning via LLaMA-Factory; for CogVLM and Llama 3.2-Vision, we performed full-parameter finetuning using ms-swift.
All models were trained for 3 epochs with a learning rate of 1e-5 and a batch size of 128.
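
For reference, here is a minimal sketch of what one of these LLaMA-Factory runs could look like as a YAML config. This is an illustration, not the authors' actual config: the model path, dataset name, template, output directory, and the per-device/accumulation split that realizes the batch size of 128 are all assumptions; only the tool (LLaMA-Factory, full-parameter SFT), the learning rate, the epoch count, and the total batch size come from the reply above.

```yaml
# Hypothetical LLaMA-Factory config for full-parameter SFT of Llama 3.1
# (e.g. llama3.1_full_sft.yaml). Dataset name and paths are placeholders.

### model
model_name_or_path: meta-llama/Meta-Llama-3.1-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full           # full-parameter finetuning, as described above

### dataset
dataset: android_instruct       # placeholder name for the Android Instruction SFT split
template: llama3
cutoff_len: 4096

### output
output_dir: saves/llama3.1-8b/full/sft

### train
per_device_train_batch_size: 2  # 2 per device x 8 GPUs x 8 accumulation steps
gradient_accumulation_steps: 8  #   = batch size 128 (this split is a guess)
learning_rate: 1.0e-5
num_train_epochs: 3.0
bf16: true
```

Such a config would be launched with `llamafactory-cli train llama3.1_full_sft.yaml`; ms-swift exposes an analogous `swift sft` entry point, which is presumably how the CogVLM and Llama 3.2-Vision runs were driven.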
Regarding data release: the dataset is currently undergoing an ethics review, and we plan to release the Android Instruction dataset (726 traces, 6208 steps) next week.