bene-ges opened this issue 4 months ago
Hi, do you have code for reproducing any of your experiments with federated fine-tuning of llama-7b (from the paper)? Or can some of the existing examples be adapted to do it?

Hi, because the main contributor to the llama fine-tuning work is approaching graduation, the code for reproducing federated fine-tuning of llama-7b is still on its way, but we will open-source it as soon as we can. Please stay tuned.

As for your second question, yes: we build on lit-llama and plug in our forward-gradient computation function to replace its backprop-based gradient computation. Hope this helps.
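For anyone adapting an existing example before the official code lands: below is a minimal sketch of the general forward-gradient idea (estimate the gradient as the directional derivative of the loss along a random tangent, computed with a single forward-mode JVP and no backward pass), using PyTorch's `torch.func`. The function name, update rule, and batch layout here are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
from torch.func import functional_call, jvp

def forward_gradient_step(model, loss_fn, inputs, targets, lr=1e-4):
    """One update step using a forward gradient instead of backprop:
    sample a random tangent v, get the directional derivative dL/dv
    from a single forward-mode JVP, and use (dL/dv) * v as an
    unbiased estimate of the true gradient.
    """
    params = dict(model.named_parameters())
    names = list(params.keys())

    # One random perturbation direction per parameter tensor.
    tangents = tuple(torch.randn_like(p) for p in params.values())

    def compute_loss(*flat_params):
        # Re-bind parameters functionally so jvp can differentiate
        # the loss with respect to them.
        out = functional_call(model, dict(zip(names, flat_params)), (inputs,))
        return loss_fn(out, targets)

    # A single forward pass yields both the loss and the JVP dL/dv.
    loss, dir_deriv = jvp(compute_loss, tuple(params.values()), tangents)

    # Forward gradient: scale each tangent by the directional derivative.
    with torch.no_grad():
        for p, v in zip(params.values(), tangents):
            p.sub_(lr * dir_deriv * v)
    return loss

# Toy usage on a small model; a llama-7b run would follow the same pattern.
model = nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
print(forward_gradient_step(model, nn.functional.cross_entropy, x, y))
```

Roughly, swapping a step like this in for the `loss.backward()` and optimizer step of a lit-llama training loop is the shape of the change the reply describes, though the released implementation may differ.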