ATP-1010 / FederatedLLM


A question about distributing global LoRA modules #2

Open bihaizhang opened 1 month ago

bihaizhang commented 1 month ago

As stated in your paper, the server distributes the stacked global LoRA module to each client, but how does each client convert this global module into a local module with a lower rank?

ATP-1010 commented 1 month ago

Thanks for reaching out. In FLoRA, clients will initialize new local LoRA modules in each communication round.

bihaizhang commented 1 month ago

> Thanks for reaching out. In FLoRA, clients will initialize new local LoRA modules in each communication round.

Thanks for your reply. But in each new round, how does each client initialize the new local LoRA modules? For example, by initialization from a random matrix?

ATP-1010 commented 1 month ago

Yes, the clients randomly initialize a new LoRA module via LoraConfig.
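A minimal sketch of that step, assuming a Hugging Face `peft` setup; the rank, `lora_alpha`, `target_modules`, and `model_name` below are illustrative placeholders, not the repository's exact settings:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(model_name)  # model_name: the client's base model (placeholder)

# Illustrative hyperparameters; each client may use its own rank.
local_config = LoraConfig(
    r=8,                                  # local rank (assumed value)
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# get_peft_model wraps the base model with freshly initialized LoRA layers;
# by default peft draws lora_A from a Kaiming-uniform distribution and
# zero-initializes lora_B, so the new adapter starts out as a no-op.
model = get_peft_model(base_model, local_config)
```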

huskydj1 commented 1 week ago

If local LoRA modules are initialized randomly in each epoch, how is FLoRA expected to learn over multiple epochs? Isn't the global model then only the aggregate of each epoch's updates?

ATP-1010 commented 1 week ago

Each round, the clients first merge the global LoRA into the local base model and then initialize a new local LoRA.
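In `peft` terms, that merge-then-reinitialize cycle might look like the following sketch; `global_lora_dir` is a hypothetical path and `local_config` is the config from the earlier sketch, so this is not the repository's actual code:

```python
from peft import PeftModel, get_peft_model

# Load the stacked global LoRA received from the server onto the base model.
# `global_lora_dir` is a hypothetical path to the saved global adapter.
model = PeftModel.from_pretrained(base_model, global_lora_dir)

# merge_and_unload() folds scaling * (lora_B @ lora_A) into the dense weights
# and returns the plain backbone, so the global update survives even after
# the adapter itself is discarded.
base_model = model.merge_and_unload()

# Wrap the updated backbone with a fresh, randomly initialized local LoRA
# for the next round of local training.
model = get_peft_model(base_model, local_config)
```

Merging makes the rank of the global module irrelevant afterwards, which is how a client whose local rank is lower than the stacked global rank can still absorb the update.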

Elevenlyt commented 1 week ago

Hello, thank you very much for open-sourcing the project. I have two questions:

1. When a client finishes training, it saves pytorch_model.bin, and when FedAvg is used, single_weights loads this file. When full=false and stacking=true, where is the stacking aggregation of the LoRA modules reflected?
2. How is the client's merging of the global LoRA implemented?

Elevenlyt commented 1 week ago


What I mean is that in the aggregation I only see weighted_single_weights, but no conditional such as `if 'lora_A' in key: key = key.replace('lora_A', 'lora_B')` that would treat the lora_A and lora_B matrices differently.
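For reference, a minimal sketch of the stacking idea from the paper, not the repository's actual aggregation code; `state_dicts` and `weights` are hypothetical inputs (one LoRA state dict and one scalar weight per client):

```python
import torch

def stack_lora(state_dicts, weights):
    """Sketch: stack per-client LoRA modules into one global module.

    Shapes follow peft's convention: lora_A is (r_k, in_features) and
    lora_B is (out_features, r_k), so concatenating A along the rank
    dimension and B along the column dimension yields a global module
    whose product equals the weighted sum of the clients' B_k @ A_k updates.
    """
    global_sd = {}
    for key in state_dicts[0]:
        if "lora_A" in key:
            # Fold the client weight into one factor, stack along the rank dim.
            global_sd[key] = torch.cat(
                [w * sd[key] for sd, w in zip(state_dicts, weights)], dim=0
            )
        elif "lora_B" in key:
            global_sd[key] = torch.cat([sd[key] for sd in state_dicts], dim=1)
    return global_sd
```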

ATP-1010 commented 2 days ago

We save the LoRA modules and re-load the PEFT model. Please check `main.py`.
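A hedged sketch of how that reload could fit together, assuming `peft`'s `set_peft_model_state_dict` helper; the filename, `client_ranks`, and config values are placeholders, and `main.py` is authoritative for the actual flow:

```python
import torch
from peft import LoraConfig, get_peft_model, set_peft_model_state_dict

# Client side: rebuild a PEFT wrapper whose rank matches the stacked global
# module (the sum of the clients' local ranks), then load the saved weights.
global_rank = sum(client_ranks)  # e.g. client_ranks = [4, 8, 8] (assumed)
global_config = LoraConfig(
    r=global_rank,
    lora_alpha=16,                        # scaling must stay consistent with
    target_modules=["q_proj", "v_proj"],  # the stacked rank (assumption)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, global_config)

# Load the stacked global adapter saved by the server (hypothetical filename).
state_dict = torch.load("global_adapter.bin")
set_peft_model_state_dict(model, state_dict)

# Merge the global update into the backbone, then continue as in the
# merge-then-reinitialize sketch above.
base_model = model.merge_and_unload()
```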