Hi! Thanks for your inspiring work.
I have a few questions about the reported accuracy in your paper, as I am not very familiar with the field of federated learning.
In Section 4.2 (Accuracy Comparison), do all clients participate in the training (participation ratio = 1.0)?
Under the partial-participation setting, the current model needs to be evaluated each round. Which dataset is this evaluation conducted on: the data of the selected clients, or the data of all clients? If the first strategy is adopted (which is closer to a practical scenario), how is the reported top accuracy obtained? Is the reported top accuracy the average accuracy across all clients?
For all experiments, including the partial-participation setting, we evaluate the global model on a global test dataset. There are no local (per-client) test datasets in our experiments.
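To make the distinction concrete, here is a minimal sketch of the two evaluation strategies discussed above. This is not the paper's code; the function names (`evaluate`, `global_eval`, `per_client_avg_eval`) and the plain-Python setup are hypothetical, just to illustrate "evaluate once on a global test set" versus "average accuracy over per-client test sets".

```python
def evaluate(model_fn, dataset):
    """Fraction of correct predictions on one list of (input, label) pairs."""
    correct = sum(1 for x, y in dataset if model_fn(x) == y)
    return correct / len(dataset)

def global_eval(model_fn, global_test):
    """Strategy used in these experiments: evaluate the global model
    on a single global test dataset (no per-client test sets)."""
    return evaluate(model_fn, global_test)

def per_client_avg_eval(model_fn, client_tests):
    """Alternative strategy raised in the question: evaluate on each
    client's local test set, then report the average accuracy."""
    accs = [evaluate(model_fn, ds) for ds in client_tests]
    return sum(accs) / len(accs)
```

Note that the two numbers generally differ when client data distributions are non-IID: the global test set weights every sample equally, while the per-client average weights every client equally regardless of dataset size.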