TsingZ0 / FedALA

AAAI 2023 accepted paper, FedALA: Adaptive Local Aggregation for Personalized Federated Learning
Apache License 2.0

Reproducing Table 2. #2

Closed cj-mclaughlin closed 1 year ago

cj-mclaughlin commented 1 year ago

Hello,

First of all thank you for providing this work.

I am looking to replicate Table 2 from the publication. However, many of these methods have numerous hyper-parameters to tune, and it is difficult to tell whether a drop in performance is due to the randomness of the dataset partition or to poor hyperparameter selection.

Would it be possible for you to share either:

  1. The full set of hyperparameters used to train the models reported in Table 2
  2. Trained model checkpoints, with the corresponding dataset splits (e.g., as you have done here for MNIST, but also for CIFAR-10/CIFAR-100)

Thank you for your time.

TsingZ0 commented 1 year ago

Sorry for the late reply. In recent months, I have been busy writing a benchmark paper that covers more FL/pFL methods. More details, including hyperparameters, will be included in that paper.

As mentioned in the README, you can generate the dataset splits with my PFL simulation platform (https://github.com/TsingZ0/PFL-Non-IID) using the settings mentioned in our paper.
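
For context, the non-IID splits in that platform are based on a Dirichlet label partition. Below is a minimal sketch of that idea, for illustration only; the actual generation scripts and the exact concentration parameter β are the ones documented in the platform's README and in the paper, and the values here are placeholders.

```python
# Illustration of a Dirichlet-based non-IID label partition (not the repo's actual script).
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    """Split sample indices across clients with a Dirichlet(beta) label skew."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # Fraction of class-c samples assigned to each client
        proportions = rng.dirichlet([beta] * num_clients)
        cut_points = (np.cumsum(proportions) * len(idx_c)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx_c, cut_points)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: 20 clients with beta=0.1 (a commonly used skew level; see the paper for the exact value)
labels = np.random.randint(0, 10, size=50000)  # stand-in for CIFAR-10 labels
splits = dirichlet_partition(labels, num_clients=20, beta=0.1)
```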

cj-mclaughlin commented 1 year ago

No worries. I was able to find optimal hyperparameters for at least some of the methods using Bayesian optimization.
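
For reference, here is a rough sketch of that kind of search, using Optuna as one possible Bayesian optimization backend; the objective function and hyperparameter names are placeholders, not code from this repository.

```python
# Sketch of Bayesian hyperparameter search with Optuna (placeholder objective).
import optuna

def run_federated_training(lr: float, local_epochs: int) -> float:
    """Placeholder for an actual FL training run; returns a dummy accuracy here."""
    return 1.0 / (1.0 + abs(lr - 0.01)) - 0.01 * local_epochs

def objective(trial: optuna.Trial) -> float:
    # Illustrative search space; adapt names and ranges to the method being tuned
    lr = trial.suggest_float("local_learning_rate", 1e-4, 1e-1, log=True)
    local_epochs = trial.suggest_int("local_epochs", 1, 5)
    return run_federated_training(lr, local_epochs)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```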

I look forward to seeing your upcoming benchmark paper. Feel free to reach out if you are interested in potential collaboration in the PFL space.

TsingZ0 commented 1 year ago

Thank you for the invitation. I am also interested in exploring potential collaboration in the PFL space. What kind of issues do you focus on in PFL?

TsingZ0 commented 1 year ago

We have provided the hyperparameter settings in the extended version; please check https://arxiv.org/pdf/2212.01197v4.pdf.