adap / flower

Flower: A Friendly Federated AI Framework
https://flower.ai

Error when using FedAdagrad #764

Open · WeiJongYang opened this issue 3 years ago

WeiJongYang commented 3 years ago

Hi, I tried to use a strategy other than FedAvg. I simply changed fl.server.strategy.FedAvg to fl.server.strategy.FedAdagrad in the sample code (advanced_tensorflow/server.py) and also changed the optimizer to SGD on the clients. Unfortunately, I got an error:

File "server.py", line 238, in main() File "server.py", line 130, in main strategy = fl.server.strategy.FedAdagrad( File "/home/user/anaconda3/envs/flwr/lib/python3.8/site-packages/flwr/server/strategy/fedadagrad.py", line 95, in init super().init( File "/home/user/anaconda3/envs/flwr/lib/python3.8/site-packages/flwr/server/strategy/fedopt.py", line 93, in init self.current_weights = parameters_to_weights(initial_parameters) File "/home/user/anaconda3/envs/flwr/lib/python3.8/site-packages/flwr/common/parameter.py", line 34, in parameters_to_weights return [bytes_to_ndarray(tensor) for tensor in parameters.tensors]

I traced it back to fedopt.py, line 93. The original code is: self.current_weights = parameters_to_weights(initial_parameters)

Should it be self.current_weights = initial_parameters instead?

I think there is a type error here, but I'm not sure whether I used the function in the right way or misunderstood the idea of FedAdagrad. Any suggestions about this problem?

franzschellman commented 3 years ago

Given that the weights object you get from model.get_weights() is a list, you have to convert it into either a tf.SparseTensor or a tf.ragged.constant object. The initial_parameters argument requires iterable tensors, so try converting your existing initial weights. Also, read the original paper to understand how they process their core function, and look into the example code if it's available. This doesn't answer your question 100%; I did reproduce the problem, but I have not been able to convert the weights into ragged tensors myself.

Sylarair commented 3 years ago

I came across the same problem. Did you find a solution to it?

nooralahzadeh commented 3 years ago

It seems you should use initial_parameters=weights_to_parameters(initial_parameters) before passing it to FedAdagrad, because initial_parameters is a List[np.ndarray], while in fedopt.py you have self.current_weights = parameters_to_weights(initial_parameters), which needs initial_parameters to be in the Parameters format!
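For illustration, here is a minimal sketch of that fix for the advanced_tensorflow setup from the original post. The MobileNetV2 architecture is an assumption standing in for whatever model the clients use, and the helper name weights_to_parameters follows the Flower version in this thread (later releases renamed it to ndarrays_to_parameters):

```python
import flwr as fl
import tensorflow as tf

# Assumption: same kind of small CIFAR-10-sized model as in the
# advanced_tensorflow example; swap in your actual architecture.
model = tf.keras.applications.MobileNetV2((32, 32, 3), classes=10, weights=None)

# model.get_weights() returns a List[np.ndarray]; FedAdagrad (via FedOpt)
# calls parameters_to_weights() on initial_parameters, so it must be a
# Parameters object, not a plain list of arrays.
initial_parameters = fl.common.weights_to_parameters(model.get_weights())

strategy = fl.server.strategy.FedAdagrad(
    initial_parameters=initial_parameters,
)
```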

laminair commented 1 year ago

Even though this issue is old, here's how I recently did it. Basically, you need to initialize strategies inheriting from FedOpt with the model's initial state (on the server). The docs already suggest how it works (assuming you have instantiated a model first): initial_parameters = weights_to_parameters([val.cpu().numpy() for _, val in self.model.state_dict().items()])

You need to pass weights that match the model architecture, as otherwise the zip() call in aggregate_fit() does not work properly. For the list part, see the docs on how a client fetches model weights in its get_parameters() function.

Also, don't forget to pass an evaluate_fn() for the FedOpt strategies to work properly.
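Putting these pieces together, a server-side sketch could look as follows. Net and evaluate are hypothetical placeholders for the clients' model class and a server-side evaluation function, and the API names again follow the Flower version used in this thread (later releases renamed weights_to_parameters to ndarrays_to_parameters and eval_fn to evaluate_fn):

```python
import flwr as fl

# Hypothetical imports for this sketch: Net is the clients' model class,
# evaluate is a server-side evaluation function.
from model import Net, evaluate

# Instantiate the same architecture the clients train.
model = Net()

# FedOpt-based strategies need the model's initial state as Parameters;
# the list order must match what clients return from get_parameters().
initial_parameters = fl.common.weights_to_parameters(
    [val.cpu().numpy() for _, val in model.state_dict().items()]
)

strategy = fl.server.strategy.FedAdagrad(
    initial_parameters=initial_parameters,
    eval_fn=evaluate,  # renamed to `evaluate_fn` in later Flower releases
)

fl.server.start_server(strategy=strategy)
```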

danieljanes commented 1 year ago

@laminair @nooralahzadeh @Sylarair @franzschellman @WeiJongYang do you have an idea how we could make it easier to use these strategies?

laminair commented 1 year ago

@danieljanes Once I had figured out what I posted above, it was straightforward to use. I also don't think there is a good way to simplify the initialization without giving up flexibility. Maybe you could include an example of how to use FedOpt strategies to help people get started. The "Strategies" docs do not talk about how to use FedOpt at all (except for the API section). A brief description, including a note that weight initialization is required (as opposed to being optional with FedAvg), could help users get FedOpt set up more quickly. I'm happy to create a pull request for both the (PyTorch) example and the docs.