SymbioticLab / FedScale

FedScale is a scalable and extensible open-source federated learning (FL) platform.
https://fedscale.ai
Apache License 2.0
386 stars 120 forks

[TorchModelAdapter] Input Argument Issue #231

Closed EricDinging closed 1 year ago

EricDinging commented 1 year ago

What happened + What you expected to happen

I was reading the code and came across this:

https://github.com/SymbioticLab/FedScale/blob/407efad987cfbaa4f8d6e5d4858a5ef8f868ff31/fedscale/cloud/internal/torch_model_adapter.py#L37

If I understand correctly, `weights` holds the new model parameters, and `current_grad_weights` holds the old model parameters. However, in the TorchServerOptimizer class, where `update_round_gradient` is implemented, the first argument is `last_model` and the second is `current_model`:

https://github.com/SymbioticLab/FedScale/blob/407efad987cfbaa4f8d6e5d4858a5ef8f868ff31/fedscale/cloud/aggregation/optimizers.py#L24

I think the input arguments in TorchModelAdapter are passed in reverse order.
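To illustrate why the order matters, here is a minimal sketch with simplified signatures (this is not the actual FedScale code; `update_round_gradient`, the list-of-tensors model representation, and the subtraction-based update are assumptions for illustration):

```python
# Simplified stand-in for TorchServerOptimizer.update_round_gradient,
# which expects (last_model, current_model) in that order.
def update_round_gradient(last_model, current_model):
    # Hypothetical round-gradient: direction from old weights to new weights.
    return [last - current for last, current in zip(last_model, current_model)]

old_model = [1.0, 2.0]   # plays the role of current_grad_weights
new_model = [0.5, 2.5]   # plays the role of weights

# Buggy call order (as in TorchModelAdapter): new model passed as last_model.
swapped = update_round_gradient(new_model, old_model)
# Correct call order: old model first, new model second.
correct = update_round_gradient(old_model, new_model)

print(swapped)  # [-0.5, 0.5]
print(correct)  # [0.5, -0.5]
```

With the arguments swapped, every component of the computed round gradient has its sign flipped, so any server-side optimizer consuming it would step in the wrong direction.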

Versions / Dependencies

Commit 407efad

Reproduction script

N/A

Issue Severity

Medium: It is a significant difficulty but I can work around it.

AmberLJC commented 1 year ago

The bug report makes sense to me. WDYT @fanlai0990

fanlai0990 commented 1 year ago

Yes. It's indeed a bug. @EricDinging Can you please submit a PR to fix it? Please remember to test it. Thanks.

EricDinging commented 1 year ago

Will do, thanks.