TsingZ0 / FedALA

AAAI 2023 accepted paper, FedALA: Adaptive Local Aggregation for Personalized Federated Learning
Apache License 2.0

Introduction

This is the implementation of our paper FedALA: Adaptive Local Aggregation for Personalized Federated Learning (accepted by AAAI 2023). An extended version (derivation of Equation (6), hyperparameter settings, etc.) can be found at https://arxiv.org/pdf/2212.01197v4.pdf.

Citation

@inproceedings{zhang2023fedala,
  title={Fedala: Adaptive local aggregation for personalized federated learning},
  author={Zhang, Jianqing and Hua, Yang and Wang, Hao and Song, Tao and Xue, Zhengui and Ma, Ruhui and Guan, Haibing},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={37},
  number={9},
  pages={11237--11244},
  year={2023}
}

Datasets and Environments

As an example, we only upload the MNIST dataset under the default heterogeneous setting with Dir(0.1). You can generate other datasets and environment settings by following PFLlib.
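The Dir(0.1) setting assigns each class's samples to clients via a Dirichlet distribution; the smaller the concentration parameter, the more skewed each client's label distribution. Below is an illustrative sketch of such a partition (this is not the PFLlib generator; function and variable names are ours):

import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
    # Split sample indices among clients so that, for each class, the
    # per-client shares follow Dir(alpha); small alpha -> heavy skew.
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(idx_c)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx_c, splits)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: partition 1,000 samples from 10 classes across 10 clients
labels = np.random.default_rng(0).integers(0, 10, size=1000)
print([len(p) for p in dirichlet_partition(labels, num_clients=10)])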

System

Adaptive Local Aggregation (ALA) module

./system/utils/ALA.py implements the ALA module, corresponding to lines 6 to 16 of Algorithm 1 in our paper. You can easily apply the ALA module to other federated learning (FL) methods by importing it as a Python module.
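For instance, here is a minimal sketch of plugging ALA into a client class in another FL method. The constructor arguments shown (cid, loss, train_data, batch_size, rand_percent, layer_idx, eta, device) and the method name adaptive_local_aggregation reflect our reading of ALA.py; check the file itself for the authoritative interface:

import torch.nn as nn
from utils.ALA import ALA  # run from ./system so this import resolves

class Client:
    def __init__(self, cid, model, train_data, device='cuda'):
        self.model = model  # the personalized local model
        self.ala = ALA(
            cid=cid,
            loss=nn.CrossEntropyLoss(),
            train_data=train_data,   # list of (x, y) training samples
            batch_size=32,
            rand_percent=80,         # percent of local data sampled for ALA
            layer_idx=2,             # apply ALA only to the top layers
            eta=1.0,                 # learning rate for the weights W
            device=device,
        )

    def receive_global_model(self, global_model):
        # Instead of overwriting the local model with the global one,
        # aggregate them element-wise with learned weights (Algorithm 1,
        # lines 6 to 16).
        self.ala.adaptive_local_aggregation(global_model, self.model)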

How to use

Training and Evaluation

All code for FedALA is stored in ./system. Just run the following commands:

cd ./system
sh run_me.sh

Note: Because floating-point accuracy varies across GPUs, you may need to tune the threshold for the ALA module (we set it to 0.01 by default in our paper), which controls when the weight learning in the start phase is considered converged. Too small a threshold may cause training to get stuck in the first iteration.
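As a rough sketch of the convergence test this note refers to: ALA trains the aggregation weights in the start phase until the recent weight-learning losses stabilize, i.e. their standard deviation drops below the threshold. The names below are illustrative, not the exact ones in ALA.py:

import torch

def weights_converged(recent_losses, threshold=0.01, num_pre_loss=10):
    # True once the last num_pre_loss losses fluctuate by less than
    # threshold. If the threshold is too small for your GPU's numeric
    # behavior, this never triggers and training stalls in iteration 1.
    if len(recent_losses) < num_pre_loss:
        return False
    return torch.std(torch.tensor(recent_losses[-num_pre_loss:])).item() < threshold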