gouthamuoc closed this pull request 1 year ago
This pull request was exported from Phabricator. Differential Revision: D40253037
@gouthamuoc Hi and thanks for your contribution! We've indeed missed updating the documentation.
However, I would say that having a doc with an MNIST classifier training to just 20% accuracy is not the point we want to make. We want to show that it is possible to train a decent classifier (90%+ accuracy) with the given privacy parameters.
--sr 0.004 would be equivalent to --batch-size 240 on MNIST (60000 * 0.004 = 240), so I would suggest updating the run commands but keeping the loss/accuracy data.
If you prefer round numbers, we can use a batch size of 256 (the loss/accuracy would probably still be within expected noise and still wouldn't need to be updated).
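The sample-rate/batch-size equivalence above can be sketched in plain Python. The helper names here are illustrative only, not part of the Opacus API:

```python
# Hypothetical helpers showing the conversion discussed above.
# Poisson sampling with rate q over a dataset of size N yields an
# expected batch size of q * N.

def sample_rate_to_batch_size(sample_rate: float, dataset_size: int) -> int:
    """Expected batch size for a given Poisson sampling rate."""
    return round(dataset_size * sample_rate)

def batch_size_to_sample_rate(batch_size: int, dataset_size: int) -> float:
    """Equivalent sampling rate for a fixed batch size."""
    return batch_size / dataset_size

# MNIST has 60000 training examples:
print(sample_rate_to_batch_size(0.004, 60000))  # -> 240
print(batch_size_to_sample_rate(240, 60000))    # -> 0.004
```

For a round batch size of 256, the equivalent rate would be 256 / 60000 ≈ 0.00427, close enough to 0.004 that the accuracy numbers should stay within noise.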
Thanks, I updated accordingly and the test runs give similar accuracy.
Summary: sr stands for sampling rate, which is now legacy code. Now it's just sample_rate = 1 / len(data_loader). This has been fixed in the example and the values have been updated.
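A minimal sketch (plain Python, no Opacus) of the relationship in the summary: len(data_loader) is the number of batches, so 1 / len(data_loader) reduces to batch_size / dataset_size when drop_last is used:

```python
# Stand-in numbers for the MNIST example; no torch required.
dataset_size = 60000                       # MNIST training set size
batch_size = 240
num_batches = dataset_size // batch_size   # what len(data_loader) returns
                                           # with drop_last=True

sample_rate = 1 / num_batches              # summary's formula
print(sample_rate)                         # -> 0.004
```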
When running with DP, the following error is thrown
For now, a temporary fix (based on https://github.com/IBM/Project_CodeNet/issues/21#issuecomment-864619383) is to set num_workers=0 in the dataset loaders. This commit does that.
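A minimal sketch of the workaround, assuming a standard PyTorch DataLoader (the dataset here is a dummy stand-in): setting num_workers=0 loads batches in the main process, so no worker subprocesses are spawned:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in dataset: 8 samples of 3 features each.
dataset = TensorDataset(torch.zeros(8, 3), torch.zeros(8))

# The workaround: num_workers=0 keeps data loading in-process,
# avoiding the multiprocessing error seen when running with DP.
loader = DataLoader(dataset, batch_size=4, num_workers=0)

for x, y in loader:
    pass  # batches produced in the main process

print(len(loader))  # -> 2 batches of 4
```

The trade-off is slower data loading, since batches are no longer prefetched in parallel; it's a temporary mitigation rather than a root-cause fix.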
Differential Revision: D40253037