Closed Zerek-Kel closed 1 year ago
Thanks for your feedback. I fixed the SCAFFOLD code. The collapse can be attributed to how much data is fed per local epoch: in the SCAFFOLD paper, the client feeds only one batch of data per local epoch, but in my previous code the client traversed the entire training set. I have checked the whole SCAFFOLD training process and can't find any remaining problems. Feel free to pull the latest code. Here is the SCAFFOLD learning curve on EMNIST with a $Dir(1.0)$ partition: at least it doesn't collapse anymore. 😂
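For reference, here is a minimal NumPy sketch of the local step described above, using one mini-batch per local step rather than a full sweep of the train set. This is not the repository's code: the client is a hypothetical least-squares model, and `scaffold_local_update` and its parameters are illustrative names. It follows the control-variate update (Option II) from the SCAFFOLD paper.

```python
import numpy as np

def scaffold_local_update(x, batches, c_global, c_local, lr=0.1):
    """One round of SCAFFOLD local training for a toy least-squares client.

    x        : global model parameters, shape (d,)
    batches  : list of (A, b) mini-batches; note ONE batch per local
               step, not a traversal of the entire train set
    c_global : server control variate c
    c_local  : this client's control variate c_i
    """
    y = x.copy()
    for A, b in batches:                       # one mini-batch per local step
        grad = A.T @ (A @ y - b) / len(b)      # mini-batch gradient
        y -= lr * (grad - c_local + c_global)  # variance-reduced SGD step
    K = len(batches)
    # Option II control-variate update: c_i+ = c_i - c + (x - y) / (K * lr)
    c_local_new = c_local - c_global + (x - y) / (K * lr)
    return y, c_local_new

# Toy usage: a single client, single local step
rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(8, d))
b = A @ np.ones(d)                # synthetic targets
x = np.zeros(d)                   # global model
c = np.zeros(d)                   # server control variate
ci = np.zeros(d)                  # client control variate
y, ci = scaffold_local_update(x, [(A, b)], c, ci)
```

With zero control variates and a single step, the update reduces to plain mini-batch SGD, and the returned `c_local_new` equals the batch gradient at `x`, as the Option II formula prescribes.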
Thanks!
Thanks for your code! I want to know why the SCAFFOLD curve on EMNIST behaves like this.