Paper
Zhu, Zhuangdi, Junyuan Hong, and Jiayu Zhou (2021). Data-Free Knowledge Distillation for Heterogeneous Federated Learning. ICML 2021.
Link
https://arxiv.org/abs/2105.10056
Maybe give motivations about why the paper should be implemented as a baseline.
Proposes FedGen, a data-free knowledge distillation approach to heterogeneous federated learning. The server learns a lightweight generator that ensembles user information in a data-free manner; the generator is then sent to the users, where the knowledge it encodes regulates local training as an inductive bias (a hedged sketch of this client-side step follows below).
Achieves better generalization performance than the state of the art.
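As a non-authoritative illustration of the client-side step described above, here is a minimal PyTorch-style sketch of a generator-regularized local update. All identifiers (`model.features`, `model.classifier`, `generator`, `label_prior`, `alpha`) are assumptions made for this sketch, not names from the authors' code; it assumes the generator maps labels plus noise to synthetic latent representations.

```python
# Illustrative sketch only -- identifiers are assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def local_update(model, generator, loader, label_prior,
                 alpha=1.0, lr=0.01, epochs=1, latent_dim=32):
    """One client's local training, regularized by the server-learned generator.

    Assumes `model.features(x)` maps inputs to a latent representation,
    `model.classifier(z)` maps latents to class logits, and `generator(y, eps)`
    maps labels plus noise to synthetic latent representations.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    generator.eval()  # the generator stays fixed during local training
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            # Standard supervised loss on the client's own data.
            logits = model.classifier(model.features(x))
            loss = F.cross_entropy(logits, y)
            # Inductive-bias regularizer: the local classifier should also
            # explain synthetic latent samples drawn from the shared generator.
            y_syn = torch.multinomial(label_prior, x.size(0), replacement=True)
            eps = torch.randn(x.size(0), latent_dim)
            with torch.no_grad():
                z_syn = generator(y_syn, eps)
            loss = loss + alpha * F.cross_entropy(model.classifier(z_syn), y_syn)
            loss.backward()
            opt.step()
    return model.state_dict()
```

To my understanding of the paper, the regularization weight and the label prior used for sampling are derived from label statistics the clients report to the server; the sketch simply treats them as given.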
Is there something else you want to add?
No response
Implementation
To implement this baseline, it is recommended to complete the following items in this order:
For first-time contributors:
- Read the `first contribution` doc.

Prepare - understand the scope.

Verify your implementation:
- Follow the steps indicated in the `EXTENDED_README.md` that was created in your baseline directory.
- Ensure your `README.md` is ready to be run by someone who is not familiar with your code. Are all step-by-step instructions clear?
- Follow your own `README.md` and verify everything runs.
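As an implementation aid, here is a hedged sketch of the complementary server-side step: fitting the lightweight generator against the ensemble of client prediction heads using only sampled labels and noise, with no raw client data. All names (`client_classifiers`, `client_weights`, `label_prior`, `latent_dim`) are assumptions for illustration, not part of the paper's released code.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn.functional as F


def train_generator(generator, client_classifiers, client_weights, label_prior,
                    latent_dim=32, steps=100, batch_size=64, lr=1e-3):
    """Data-free ensemble distillation into the generator on the server.

    `client_classifiers` are the clients' prediction heads (latent -> logits);
    `client_weights` are per-client weights, e.g. proportional to label counts.
    """
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Sample labels from the aggregate label distribution and noise vectors.
        y = torch.multinomial(label_prior, batch_size, replacement=True)
        eps = torch.randn(batch_size, latent_dim)
        z = generator(y, eps)
        # Generated latent samples should be classified as y by the weighted
        # ensemble of client classifiers (knowledge-distillation objective).
        loss = sum(w * F.cross_entropy(clf(z), y)
                   for clf, w in zip(client_classifiers, client_weights))
        loss.backward()
        opt.step()
    return generator
```

A full baseline would, as I read the paper, alternate this server step with the client updates sketched earlier, aggregate model parameters as in FedAvg, and broadcast both the aggregated model and the updated generator each round.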