KarhouTam / FL-bench

Benchmark of federated learning. Dedicated to the community. 🤗

Are running_var, running_mean, num_batches_tracked keys trainable? #1

Closed djskwh closed 1 year ago

djskwh commented 1 year ago

Hi, I recently found your wonderful FL-bench repository. I have a question about your code structure.

I'm using Python 3.10 and torch 1.13.1.

In FL-bench/src/client/fedavg.py, the check at line 73,

if not param.requires_grad

generates the running_mean, running_var, and num_batches_tracked keys.

As far as I know, FedAvg only updates weight and bias, and keeps the BatchNorm buffers (running_mean, running_var, num_batches_tracked) unchanged.

So I'm guessing your machine does not generate the running_mean, running_var, and num_batches_tracked keys.

Is this because of a library version mismatch between mine and yours?
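
For reference, a minimal sketch (not the FL-bench code itself) of how BatchNorm buffers show up in a PyTorch state_dict: with `keep_vars=True`, the buffer tensors are returned directly and report `requires_grad=False`, so a check like `if not param.requires_grad` will pick up those keys on any recent PyTorch version.

```python
from torch import nn

# A tiny model with a BatchNorm layer, just to inspect its state_dict keys.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

# keep_vars=True returns the live tensors, so requires_grad can be inspected.
for key, param in model.state_dict(keep_vars=True).items():
    print(key, param.requires_grad)

# Expected output on torch 1.13.1 (and other recent versions):
# 0.weight True
# 0.bias True
# 1.weight True
# 1.bias True
# 1.running_mean False
# 1.running_var False
# 1.num_batches_tracked False
```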

KarhouTam commented 1 year ago

Hi, djskwh. Thanks for your attention. To answer the question in your title: running_mean, running_var, and num_batches_tracked are not trainable. On my machine, those keys are generated as well. As for why I keep those buffer parameters personal to each client: I think those parameters are harmful to training if they are shared globally (just my opinion). So it's okay to remove them from personal_params_name if you want; I think the final results won't change much.
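
For illustration only (this is not the actual FL-bench implementation), a minimal sketch of the idea: collect the non-trainable buffer keys into a personal_params_name list and exclude them from the globally aggregated state, so each client keeps its own BatchNorm statistics. Removing the buffer keys from that list would make them part of the globally shared state again.

```python
from torch import nn

model = nn.Sequential(nn.Linear(10, 8), nn.BatchNorm1d(8))

# Keys treated as personal: everything that is not trainable (the BN buffers).
personal_params_name = [
    key
    for key, param in model.state_dict(keep_vars=True).items()
    if not param.requires_grad
]

state = model.state_dict()
# Parameters uploaded for global aggregation (weight / bias only).
global_params = {k: v for k, v in state.items() if k not in personal_params_name}
# Buffers kept locally on the client.
personal_params = {k: v for k, v in state.items() if k in personal_params_name}

print(sorted(personal_params))
# ['1.num_batches_tracked', '1.running_mean', '1.running_var']
```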

djskwh commented 1 year ago

Thanks for the answer, KarhouTam!