lunan0320 / Federated-Learning-Knowledge-Distillation

Repo implementing several federated learning methods: FedAvg, FedMD, FedProto, FedProx, FedHKD.

May I ask the difference between this repo and Federated-Hyper-Knowledge-Distillation #3

Open dddkyi opened 2 months ago

dddkyi commented 2 months ago

Thank you for your valuable work. May I ask about the difference between this repo and https://github.com/CityChan/Federated-Hyper-Knowledge-Distillation? It seems the two repos have the same contributor and a similar file tree. Thanks.

lunan0320 commented 2 months ago

Thank you for your question. I found some problems with the code in the original repository when I actually used it. You can refer to the issue I raised there, which was also acknowledged by the author. I made some modifications and improvements on that basis.

dddkyi commented 2 months ago

Thank you for your patient reply. Building on your improved code, I implemented the experiments for my paper and cited your GitHub repo in it: https://arxiv.org/abs/2407.05276. Thanks again for your help. P.S. Only when I run FedHKD is the accuracy abnormal; as reported in my paper, it stays around 0.1 most of the time. May I ask whether you ran into a similar problem with the FedHKD code? Thank you very much.

lunan0320 commented 2 months ago

Congratulations on your recent work, and thank you for citing our code in your paper! I should mention that I don't recall whether I fully validated the FedHKD results. The accuracy of around 0.1 reported in your paper does look unusual. I apologize for the inconvenience, but I recently shifted my research focus and, unfortunately, haven't had the opportunity to update that part of the code. It might be helpful to first check whether any similar issues apply here.
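As a quick sanity check (a minimal sketch assuming a PyTorch-style model and `DataLoader`, not code from this repo), you could verify whether the global model's predictions have collapsed to one class: accuracy near 1/num_classes (≈0.1 on a 10-class dataset) usually means the model is effectively guessing rather than training slowly.

```python
# Minimal sanity check (hypothetical helper, not part of this repo):
# accuracy near 1/num_classes often means predictions collapsed to one class.
import torch
from collections import Counter

def check_prediction_collapse(model, test_loader, device="cpu"):
    """Report test accuracy and the histogram of predicted classes."""
    model.eval()
    correct, total = 0, 0
    pred_counts = Counter()
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.size(0)
            pred_counts.update(preds.cpu().tolist())
    acc = correct / total
    print(f"accuracy = {acc:.3f}")
    print(f"predicted-class histogram: {dict(pred_counts)}")
    # If one class dominates the histogram, the issue is more likely in the
    # aggregation of the shared knowledge (e.g., zeroed or mismatched class
    # representations) than in the local training loop itself.
    return acc, pred_counts
```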