caoshuai03 opened 1 month ago
Thanks for your interest in LMFlow and LISA!
Regarding the first question, we conducted the fine-tuning with the same seed and the same base model, so the initial weights should be identical before fine-tuning.
As for the second question, weighted-averaging methods are certainly different from the normal training process. But since they are less frequently adopted in the practice of fine-tuning LLMs, we didn't conduct experiments on them. To draw insights from weighted-averaging methods, we think at least two sets of experiments would be needed if anyone is interested in this aspect:
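To make the idea concrete, here is a minimal sketch of what "weighted averaging" of model parameters could look like. This is an illustrative helper, not LMFlow or LISA code; the checkpoint representation (a plain dict of parameter lists) and the function name `weighted_average` are assumptions for demonstration only.

```python
def weighted_average(checkpoints, weights):
    """Combine several parameter dicts into one, weighting each checkpoint.

    checkpoints: list of dicts mapping parameter name -> list of floats
    weights: one scalar weight per checkpoint (need not sum to 1; normalized here)
    """
    assert len(checkpoints) == len(weights) and checkpoints, "one weight per checkpoint"
    total = sum(weights)
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = [
            sum(w * ckpt[name][i] for ckpt, w in zip(checkpoints, weights)) / total
            for i in range(len(checkpoints[0][name]))
        ]
    return averaged


# Usage: average two toy checkpoints with equal weight.
ckpt_a = {"layer0.weight": [1.0, 2.0]}
ckpt_b = {"layer0.weight": [3.0, 6.0]}
merged = weighted_average([ckpt_a, ckpt_b], [1.0, 1.0])
```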
Hope this information can be helpful 😄
In the paper, only a comparison of the mean weight norms of each layer during LoRA fine-tuning is given.
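For reference, a per-layer weight-norm comparison of the kind described above could be computed roughly as follows. This is a hedged sketch assuming weights are available as flat lists of floats per layer; the function name `layer_weight_norms` is hypothetical and not part of LMFlow.

```python
import math


def layer_weight_norms(state):
    """Return the L2 norm of each layer's flattened weights.

    state: dict mapping layer name -> list of floats (flattened weights)
    """
    return {
        layer: math.sqrt(sum(x * x for x in params))
        for layer, params in state.items()
    }


# Usage: compare norms across layers of a toy model.
norms = layer_weight_norms({
    "layer0": [3.0, 4.0],   # norm 5.0
    "layer1": [0.0, 1.0],   # norm 1.0
})
```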