pliang279 / LG-FedAvg

[NeurIPS 2019 FL workshop] Federated Learning with Local and Global Representations

CNNCifar.weight_keys #8

Open KiteFlyKid opened 3 years ago

KiteFlyKid commented 3 years ago

Nice work! But in Nets.py, why are the fc layers listed ahead of the conv layers in CNNCifar.weight_keys?

    self.weight_keys = [['fc1.weight', 'fc1.bias'],
                        ['fc2.weight', 'fc2.bias'],
                        ['fc3.weight', 'fc3.bias'],
                        ['conv2.weight', 'conv2.bias'],
                        ['conv1.weight', 'conv1.bias'],
                        ]

If my understanding is correct, it should look like this, since the conv layers run first and the feedforward (fc) layers come after:

    self.weight_keys = [['conv1.weight', 'conv1.bias'],
                        ['conv2.weight', 'conv2.bias'],
                        ['fc1.weight', 'fc1.bias'],
                        ['fc2.weight', 'fc2.bias'],
                        ['fc3.weight', 'fc3.bias'],
                        ]
mengcz13 commented 3 years ago

I think this is for keeping the conv layers global: the logic of the code is to select the last N entries of weight_keys as the global layers.
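
A minimal sketch of that selection, assuming the slicing works roughly like this (num_layers_keep is an illustrative name, not necessarily the variable used in the repo):

    # With the fc groups listed first, taking the last N groups of
    # weight_keys yields the conv layers as the globally shared part.
    weight_keys = [['fc1.weight', 'fc1.bias'],
                   ['fc2.weight', 'fc2.bias'],
                   ['fc3.weight', 'fc3.bias'],
                   ['conv2.weight', 'conv2.bias'],
                   ['conv1.weight', 'conv1.bias']]

    num_layers_keep = 2  # hypothetical: treat the last 2 groups as global
    w_glob_keys = weight_keys[len(weight_keys) - num_layers_keep:]
    print(w_glob_keys)
    # [['conv2.weight', 'conv2.bias'], ['conv1.weight', 'conv1.bias']]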

CarlBye commented 3 years ago

> I think this is for keeping the conv layers global: the logic of the code is to select the last N entries of weight_keys as the global layers.

I think the local part extracts high-level, compact features (like a feature extractor) and the global part acts like a classifier. If the code selects the conv layers as the global part, does that match the original meaning of the paper? Or am I misunderstanding the paper?
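
For comparison, here is a hedged sketch of averaging only the globally shared keys; aggregate_global_keys and w_locals are illustrative names, not the repo's actual API:

    import copy
    import torch

    def aggregate_global_keys(w_locals, w_glob_keys):
        # w_locals: list of client state_dicts
        # w_glob_keys: groups of parameter names to average globally
        w_avg = copy.deepcopy(w_locals[0])
        for group in w_glob_keys:
            for key in group:
                # average this shared parameter across all clients
                w_avg[key] = torch.stack([w[key] for w in w_locals]).mean(dim=0)
        # keys outside w_glob_keys are left as client 0's values here;
        # in LG-FedAvg each client keeps its own local keys
        return w_avg

Passing the fc groups as w_glob_keys would match the paper's reading above; passing the conv groups matches what the posted weight_keys ordering selects.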

AlphaPav commented 2 years ago

I have the same question. Has this issue been resolved?

KiteFlyKid commented 2 years ago

You can try to reproduce their experiments, but when I tried modifying the code, the results differed from what they report in the paper (I did this last year, so I may be forgetting the details).