Open JackingChen opened 1 year ago
Hi, thanks for providing this wonderful repository! I'm wondering if there will be support for parallelizing client training in each round, specifically making the local update loop in federated_main.py execute in parallel processes:
```python
for idx in idxs_users:
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    w, loss = local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)
    local_weights.append(copy.deepcopy(w))
    local_losses.append(copy.deepcopy(loss))
```
Or, are there any suggestions for starting work on this approach?
> parallelization of client training in each round
I also have the same question. Thanks.
It's not supported right now, but you can use threading to do it.
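A minimal sketch of that threading idea, using `concurrent.futures.ThreadPoolExecutor` to run each client's local update concurrently and collect the weights and losses in order. The `local_update` function here is a hypothetical stand-in for the repo's `LocalUpdate.update_weights`; each thread gets its own deep copy of the global weights, mirroring the `copy.deepcopy(global_model)` call in the original loop. Note that for CPU-bound PyTorch training the GIL limits what threads can gain, so real speedups may require `multiprocessing` or running clients on separate GPUs instead.

```python
import copy
from concurrent.futures import ThreadPoolExecutor

def local_update(global_weights, client_id):
    # Stand-in for LocalUpdate(...).update_weights(...): a real client
    # would train on its own data shard and return updated weights and
    # its training loss. Here we just perturb the weights deterministically.
    w = {k: v + client_id for k, v in global_weights.items()}
    loss = float(client_id)
    return w, loss

def parallel_round(global_weights, client_ids, max_workers=4):
    """Run all selected clients' local updates concurrently."""
    local_weights, local_losses = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Each client trains on its own deep copy of the global weights,
        # so concurrent updates cannot interfere with one another.
        futures = [
            pool.submit(local_update, copy.deepcopy(global_weights), cid)
            for cid in client_ids
        ]
        # Iterating the futures list (not as_completed) preserves
        # submission order, matching the sequential loop's behavior.
        for f in futures:
            w, loss = f.result()
            local_weights.append(w)
            local_losses.append(loss)
    return local_weights, local_losses

global_w = {"layer": 0.0}
weights, losses = parallel_round(global_w, client_ids=[1, 2, 3])
```

After the round, `weights` can be averaged into the new global model exactly as the sequential version does with `average_weights(local_weights)`.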