Closed AnasKhann22 closed 1 year ago
Hi! In a simulated setting — e.g., when you manually partition the MNIST dataset — I would approach it as follows. Divide the dataset into k folds at the start. Distribute k-1 of those folds among the clients (each client can further split its share into local train and test sets), and hold out the remaining fold for evaluation. Train the system k times, each time holding out a different fold. You can then try different strategies, training parameters, or model architectures, averaging the results across the k folds for each setting. Comparing those averages should better reflect the generalization ability of the system in real-world applications. I hope this answers your question.
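A minimal sketch of the index bookkeeping described above, independent of Flower itself (the function name `make_federated_folds` and the dictionary keys are illustrative, not a Flower API): for each of the k runs, one fold is held out for evaluation and the remaining k-1 folds are sharded across the simulated clients.

```python
import random

def make_federated_folds(num_samples, k=5, num_clients=10, seed=0):
    """For each of k runs, hold out one fold for evaluation and shard
    the remaining k-1 folds across num_clients simulated clients."""
    rng = random.Random(seed)
    indices = list(range(num_samples))
    rng.shuffle(indices)
    # Split the shuffled indices into k roughly equal folds.
    folds = [indices[i::k] for i in range(k)]
    runs = []
    for held_out in range(k):
        test_idx = folds[held_out]
        train_idx = [i for f, fold in enumerate(folds)
                     if f != held_out for i in fold]
        # Shard the k-1 training folds across clients; each client can
        # further split its shard into a local train/test set.
        shards = [train_idx[c::num_clients] for c in range(num_clients)]
        runs.append({"client_shards": shards, "held_out_test": test_idx})
    return runs
```

Each entry in `runs` corresponds to one training of the whole federated system; after running all k, you would average the evaluation metrics per setting before comparing settings.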
@AnasKhann22, as there was no further comment on this for three weeks and @adam-narozniak has answered the question, I am closing the ticket. @AnasKhann22 feel free to open it again if any other questions should arise.
What is your question?
How can we use k-fold cross-validation in Flower? Do we have to apply it on the client side, the server side, or both?