UbiquitousLearning / FwdLLM


code for reproducing experiment with llama-7b from the paper #1

Open bene-ges opened 4 months ago

bene-ges commented 4 months ago

Hi, do you have code for reproducing any of your experiments with federated finetuning of llama-7b (from the paper)? Or maybe some of the existing examples can be adapted to do it?

caidongqi commented 3 months ago

Hi, since the main contributor to the llama fine-tuning work was approaching graduation, the code for reproducing federated fine-tuning of llama-7b is still on its way. We will open-source it as soon as we can. Please stay tuned.

For your second question, yes. We built on lit-llama and plugged in our forward-gradient computation function to replace its backpropagation-based gradient computation.
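For anyone adapting an existing example in the meantime, the core idea of the swap described above can be sketched in a few lines. This is a hedged, minimal illustration of forward-gradient estimation in general (not FwdLLM's actual code): the loss, its JVP, and the sample count are all made up here, and a real implementation would compute the JVP with forward-mode autodiff (e.g. `torch.func.jvp`) over the model's forward pass instead of analytically.

```python
import random

def true_grad(w):
    # Toy loss f(w) = sum_i (w_i - i)^2, so the true gradient is 2*(w_i - i).
    return [2.0 * (wi - i) for i, wi in enumerate(w)]

def jvp(w, v):
    # Directional derivative of f at w along v (a scalar).
    # Computed analytically for this toy loss; in practice this comes from
    # a single forward pass with forward-mode autodiff, no backprop needed.
    return sum(2.0 * (wi - i) * vi for i, (wi, vi) in enumerate(zip(w, v)))

def forward_gradient(w, rng):
    # Sample a random perturbation direction v ~ N(0, I), then scale it by
    # the JVP. Since E[v v^T] = I, this is an unbiased gradient estimate.
    v = [rng.gauss(0.0, 1.0) for _ in w]
    d = jvp(w, v)
    return [d * vi for vi in v]

if __name__ == "__main__":
    rng = random.Random(0)
    w = [0.0, 0.0, 0.0]  # true gradient at this point is [0, -2, -4]
    n = 20000            # averaging many estimates reduces the variance
    acc = [0.0] * len(w)
    for _ in range(n):
        g = forward_gradient(w, rng)
        acc = [a + gi / n for a, gi in zip(acc, g)]
    print(acc)  # should be close to [0, -2, -4]
```

In the federated setting this matters because each client only ever runs forward passes, which cuts memory use relative to storing activations for backprop; the variance of the single-sample estimate is what averaging across clients and perturbations has to absorb.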

Hope it can help you.