LabeliaLabs / distributed-learning-contributivity

Simulate collaborative ML scenarios, experiment with multi-partner learning approaches, and measure the respective contributions of different datasets to model performance.
https://www.labelia.org
Apache License 2.0

The MNIST model seems too deep/complex #324

Open arthurPignet opened 3 years ago

arthurPignet commented 3 years ago

The MNIST NN has 1,199,882 parameters, for an accuracy of 0.9893 +/- 0.01 (Keras fit, epochs=12, batch_size=168).

For comparison, the CIFAR model has 1,250,858 parameters and reaches 0.7888 +/- 0.03 accuracy with epochs=20, batch_size=140.

I am not sure, but I think we can either tune our optimization parameters for MNIST and reach 0.999 accuracy, or decide that this is not really useful in our case and go for a smaller/faster model? A sketch of the second option is below.
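
As a rough illustration of the smaller/faster option, here is a minimal sketch of a compact MNIST CNN. The layer sizes are my own assumptions for illustration, not the repo's actual architecture; the main trick is replacing a large Flatten + Dense head with global average pooling, which is where most of the 1.2M parameters usually come from.

```python
# Hypothetical compact MNIST model (layer sizes are illustrative assumptions,
# not the architecture currently used in this repo).
from tensorflow import keras
from tensorflow.keras import layers

def build_small_mnist_model():
    """Compact CNN for MNIST: global average pooling instead of a large
    dense head keeps the parameter count far below ~1.2M."""
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),  # replaces a costly Flatten + Dense
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_mnist_model()
model.summary()  # ~24k parameters, roughly 50x fewer than 1,199,882
```

Something in this range should still get well above 0.98 accuracy on MNIST while training much faster, though the exact trade-off would need to be measured in our multi-partner scenarios.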