What happened + What you expected to happen
I was reading the code and came across this:
https://github.com/SymbioticLab/FedScale/blob/407efad987cfbaa4f8d6e5d4858a5ef8f868ff31/fedscale/cloud/internal/torch_model_adapter.py#L37
If I understand correctly, `weights` holds the new model parameters and `current_grad_weights` holds the old model parameters. However, in the `TorchServerOptimizer` class, where `update_round_gradient` is implemented, the first argument is `last_model` and the second is `current_model`:

https://github.com/SymbioticLab/FedScale/blob/407efad987cfbaa4f8d6e5d4858a5ef8f868ff31/fedscale/cloud/aggregation/optimizers.py#L24

I think the input arguments in `TorchModelAdapter` are passed in reverse order.
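To make the suspected mismatch concrete, here is a minimal, runnable sketch. The class and argument names follow the linked lines, but the bodies are illustrative stand-ins I wrote (including the `target_model` parameter name and the placeholder strings), not the real FedScale implementations:

```python
# Illustrative stand-ins for the linked FedScale classes; only the
# argument names mirror the real code, the bodies are placeholders.

class TorchServerOptimizer:
    def update_round_gradient(self, last_model, current_model, target_model):
        # Per optimizers.py#L24, the first parameter is meant to be the
        # old (last-round) weights and the second the new ones.
        print("last_model    <-", last_model)
        print("current_model <-", current_model)


class TorchModelAdapter:
    def __init__(self):
        self.optimizer = TorchServerOptimizer()

    def set_weights(self, weights):
        # Snapshot of the model's current (old) parameters.
        current_grad_weights = "old parameters"
        # torch_model_adapter.py#L37 passes the *new* weights as the first
        # positional argument, so they get bound to `last_model` and the
        # old ones to `current_model`, the reverse of what the signature
        # suggests.
        self.optimizer.update_round_gradient(
            weights, current_grad_weights, target_model=None)


TorchModelAdapter().set_weights("new parameters")
# Output:
#   last_model    <- new parameters
#   current_model <- old parameters
```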
Versions / Dependencies
Commit #407efad
Reproduction script
N/A
Issue Severity
Medium: It is a significant difficulty but I can work around it.