AshwinRJ / Federated-Learning-PyTorch

Implementation of Communication-Efficient Learning of Deep Networks from Decentralized Data
MIT License

Parallel computing support #43

Open JackingChen opened 1 year ago

JackingChen commented 1 year ago

Hi, thanks for providing this wonderful repository! I'm wondering whether there will be support for parallelizing client training within each round.

Specifically, I'd like the local-update loop in federated_main.py to be executed by parallel processes:

for idx in idxs_users:
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    w, loss = local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)
    local_weights.append(copy.deepcopy(w))
    local_losses.append(copy.deepcopy(loss))

Or are there any suggestions for how to start working on this approach?
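
To make the question concrete, the kind of change I have in mind looks roughly like the sketch below. It is untested and only a sketch: it reuses the existing names from federated_main.py (LocalUpdate, train_dataset, user_groups, logger, global_model, args, epoch, idxs_users), picks max_workers=4 arbitrarily, and relies on the fork start method (the default on Linux) so that workers inherit those globals. I'm not sure how well this would play with CUDA, which usually needs torch.multiprocessing and the 'spawn' start method.

import copy
from concurrent.futures import ProcessPoolExecutor

def train_one_client(idx):
    # Each worker trains one client on its own deep copy of the current global model.
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    return local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)

# Inside the loop over communication rounds:
local_weights, local_losses = [], []
with ProcessPoolExecutor(max_workers=4) as pool:  # worker count chosen arbitrarily
    for w, loss in pool.map(train_one_client, idxs_users):
        local_weights.append(copy.deepcopy(w))
        local_losses.append(copy.deepcopy(loss))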

saigontrade88 commented 1 year ago

parallelization of client training in each round

I also have the same question. Thanks.

Xiaoni-61 commented 11 months ago

It's not supported out of the box right now, but you can use threading to do it yourself.
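
For example, something along these lines (an untested sketch using the same variable names as federated_main.py; note that because of Python's GIL, threads mainly help when the per-client training releases the GIL, e.g. during GPU kernels or other native ops):

import copy
from concurrent.futures import ThreadPoolExecutor

def train_one_client(idx):
    # One client's local update on its own copy of the current global model.
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    return local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)

with ThreadPoolExecutor(max_workers=4) as pool:  # thread count is arbitrary
    results = list(pool.map(train_one_client, idxs_users))

local_weights = [copy.deepcopy(w) for w, loss in results]
local_losses = [loss for w, loss in results]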