Closed Simha55 closed 2 months ago
Thanks for raising this issue. There are two differences between non-private and private training: clipping and noising. If you want epsilon = infinity, you can set `noise_multiplier=0` in the call to `make_private`.

I believe the difference you observe is due to clipping (`max_grad_norm`). You can test that by also setting the clipping threshold to a very high value, or by checking what several values of the clipping threshold yield in terms of accuracy/AUC.
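To see why both knobs matter, here is a rough, framework-free sketch of one DP-SGD aggregation step (the `dp_sgd_step` helper is hypothetical, not Opacus code): each per-sample gradient is clipped to `max_grad_norm`, the clipped gradients are summed, Gaussian noise scaled by `noise_multiplier * max_grad_norm` is added, and the result is averaged. With `noise_multiplier=0` and a very large `max_grad_norm`, the step reduces to an ordinary mini-batch gradient, which is the "no DP" configuration suggested above.

```python
import math
import random

def dp_sgd_step(per_sample_grads, max_grad_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step (illustrative sketch, not Opacus code):
    clip each per-sample gradient to max_grad_norm, sum, add Gaussian
    noise with std noise_multiplier * max_grad_norm, then average."""
    clipped = []
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, max_grad_norm / (norm + 1e-12))
        clipped.append([x * scale for x in g])
    dim = len(per_sample_grads[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * max_grad_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    n = len(per_sample_grads)
    return [x / n for x in noisy]

# Two toy per-sample gradients with very different norms (5.0 and 0.5).
grads = [[3.0, 4.0], [0.3, 0.4]]

# noise_multiplier=0 and a huge max_grad_norm: recovers the plain
# mini-batch average [1.65, 2.2], i.e. no DP at all.
no_dp = dp_sgd_step(grads, max_grad_norm=1e9, noise_multiplier=0.0,
                    rng=random.Random(0))

# Same zero noise, but a tight clipping threshold of 1.0: the large
# gradient is rescaled, so the result differs even though sigma = 0.
clipped_only = dp_sgd_step(grads, max_grad_norm=1.0, noise_multiplier=0.0,
                           rng=random.Random(0))
```

This is why epsilon = 10^6 (tiny noise) can still train differently from plain SGD: clipping alone changes the update direction, independently of the noise.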
🐛 Bug
I tried two variations: in one I trained my model with Opacus with epsilon set to 10^6, and in the other I trained the model without Opacus at all. I find that the results of model 1 are much better than model 2 in terms of AUROC. I have the following questions:

1. What is the reason for this, given that setting epsilon to 10^6 is almost equivalent to infinity?
2. Does the privacy engine have an option to set epsilon to an infinite value (in other words, no DP), rather than setting the value manually?
3. If not, will one be introduced in the future, and which parameters do I need to set in the privacy engine to achieve no DP (gradient clipping, noise, etc.)?
Thank you