dheeraj-hj / Federated-Learning


All clients need not have the same number of data samples #2

Open Indithem opened 2 months ago

Indithem commented 2 months ago

In that paper, the FedAvg algorithm assigns each client a weight proportional to the size of its training set in each round, so clients with differently sized training sets contribute differently....

We should simulate this as well...
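A minimal sketch of this weighted aggregation, assuming each client's parameters arrive as a flat NumPy array and `client_sizes` holds the local dataset sizes $n_k$ (all identifiers here are illustrative, not from the repo):

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style sketch).

    client_params: list of np.ndarray, parameters w_k sent by each client
    client_sizes:  list of int, local dataset sizes n_k
    """
    n = sum(client_sizes)
    # Each client contributes in proportion to its share of the total data.
    return sum((n_k / n) * w_k for w_k, n_k in zip(client_params, client_sizes))
```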

Indithem commented 2 months ago

Also, in our approach all clients send the updated model parameters ($w$) instead of the parameter changes ($\Delta w$).

We need to compute a weighted average of the changes and add it to the global parameters: $$W_{\text{global}} \leftarrow W_{\text{global}} + \sum_k \dfrac{n_k}{n} \Delta w_k$$
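A minimal sketch of this update rule, assuming the server keeps `w_global` and reconstructs $\Delta w_k = w_k - W_{\text{global}}$ from the parameters the clients send (names are illustrative):

```python
import numpy as np

def aggregate_deltas(w_global, client_params, client_sizes):
    """Apply W_global <- W_global + sum_k (n_k / n) * delta_w_k.

    Clients send updated parameters w_k rather than deltas, so the
    server recovers delta_w_k = w_k - w_global before averaging.
    """
    n = sum(client_sizes)
    weighted_delta = sum(
        (n_k / n) * (w_k - w_global)
        for w_k, n_k in zip(client_params, client_sizes)
    )
    return w_global + weighted_delta
```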

Indithem commented 2 months ago

ok, it seems that the approach described above and what we did so far are equivalent,

but only when all the $n_k$ are equal (e.g., all 1): in that case $n$ equals the number of clients, so $\sum_k \dfrac{n_k}{n} \Delta w_k = \dfrac{1}{n} \sum_k \Delta w_k$, which is exactly what the plain average adds. Let's verify how close they stay in other cases.

(Approach so far: $W_{\text{global}} \leftarrow \dfrac{1}{n} \sum_k \left( W_{\text{global}} + \Delta w_k \right)$, where $n$ here is the number of clients.)
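A quick numeric check of that claim, under an assumed toy setup (random vectors, three clients; none of this is from the repo): with equal $n_k$ the two rules agree, with unequal $n_k$ they generally do not.

```python
import numpy as np

rng = np.random.default_rng(0)
w_global = rng.normal(size=4)
client_params = [w_global + rng.normal(size=4) for _ in range(3)]

def plain_average(params):
    # Approach so far: W_global <- (1/n) * sum_k (W_global + delta_w_k)
    return sum(params) / len(params)

def weighted_update(w_global, params, sizes):
    # Weighted rule: W_global <- W_global + sum_k (n_k / n) * delta_w_k
    n = sum(sizes)
    return w_global + sum((s / n) * (w - w_global) for w, s in zip(params, sizes))

# Equal n_k: the two rules coincide.
print(np.allclose(plain_average(client_params),
                  weighted_update(w_global, client_params, [1, 1, 1])))   # True

# Unequal n_k: they generally differ.
print(np.allclose(plain_average(client_params),
                  weighted_update(w_global, client_params, [1, 5, 10])))  # False
```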