privacytrustlab / ml_privacy_meter

Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
MIT License

Can't achieve a better accuracy than 0.5121 with the blackbox tutorial: Running the Alexnet CIFAR-100 Attack #36

Closed · chris-prenode closed this issue 3 years ago

chris-prenode commented 3 years ago

Hello everyone,

I'd like to experiment with this tool in a federated learning setting for my master's thesis, but I can't achieve an accuracy better than 0.5121 with the tutorial using the blackbox config. So I think I'm overlooking something essential.

My execution of the tutorial:

  1. Followed the setup instructions.
  2. Extracted the pre-trained model as described here.
  3. Ran the script to download the required data files.
  4. Used the blackbox config for the attackobj:

     ```python
     attackobj = ml_privacy_meter.attack.meminf.initialize(
         target_train_model=cmodelA,
         target_attack_model=cmodelA,
         train_datahandler=datahandlerA,
         attack_datahandler=datahandlerA,
         layers_to_exploit=[26],   # output of the final layer
         exploit_loss=False,       # blackbox: loss signal not exploited
         device=None)
     ```

  5. Ran the attack code: `python tutorials/attack_alexnet.py`
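For context, step 5's script builds the attack object as in step 4 and then trains the attack model. As far as I can tell, the final call in the tutorial is along these lines (a sketch; `train_attack` is the method name as I read it in the tutorial script):

```python
# After building attackobj (step 4), the tutorial trains the membership
# inference attack model; per-epoch attack accuracy is printed to the
# terminal (see the attached terminal_output.txt).
attackobj.train_attack()
```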

Besides this, I had to make a few changes to the dataset scripts because they used Python 2 functions. I commented out the original lines of code to make my changes transparent; the modified files are attached to this issue. I also added the following line below the matplotlib import (line 11): `matplotlib.use('TkAgg')`. This was necessary to work around an error related to macOS.
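For anyone hitting the same macOS backend error, the change in the preprocessing script looks like this (the backend must be selected before pyplot is imported):

```python
import matplotlib
matplotlib.use('TkAgg')  # select the Tk backend before importing pyplot (macOS workaround)
import matplotlib.pyplot as plt
```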

The terminal output is also attached.

terminal_output.txt

create_cifar100_train.txt
preprocess_cifar100.txt

I tried the whitebox config as well, and there I achieved an accuracy of 0.7480 within the first 3 epochs. So I hope there is just some little thing I'm overlooking in the blackbox setting.
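For reference, the whitebox run additionally exploits gradients. A sketch of how that config might differ from the blackbox one, assuming the same `initialize` API as in step 4 (the `gradients_to_exploit` parameter comes from my reading of the tutorial, and the layer index is illustrative):

```python
# Whitebox variant (sketch): besides the final-layer output, the attack
# also exploits the loss and the gradients of an intermediate layer.
attackobj = ml_privacy_meter.attack.meminf.initialize(
    target_train_model=cmodelA,
    target_attack_model=cmodelA,
    train_datahandler=datahandlerA,
    attack_datahandler=datahandlerA,
    layers_to_exploit=[26],
    gradients_to_exploit=[6],  # layer index is illustrative
    exploit_loss=True,
    device=None)
```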

Thank you for supporting this project. All the best from Karlsruhe, Germany.

amad-person commented 3 years ago

Hi @chris-prenode, it looks like you have set up the tool correctly, and the dataset loading scripts you've linked don't seem to be the cause of the low attack accuracy. Please take a look at this reply of mine on another issue: https://github.com/privacytrustlab/ml_privacy_meter/issues/31#issuecomment-801788840. Let me know if this helps.

I will also try running the attacker config you mentioned and see if I get a higher accuracy on my machine.

chris-prenode commented 3 years ago

Hi @amad-person, I'll apply your suggestions in the planned experiments and will report how those procedures affect the attack accuracy. At the moment I'm preparing my experiment environment (datasets, models, FL system, learning procedure, and ML Privacy Meter) to observe what impact the FL system has on model vulnerability across different settings (dataset, model). That's why I wanted to use the tutorial results as a first reference point, to be sure I'm using ML Privacy Meter correctly. So thanks for running the attack config I mentioned on your machine; that will give me a comparison!

Have a great weekend!

chris-prenode commented 3 years ago

Hi @amad-person, could you achieve higher accuracy on your machine?

Greetings, Chris

amad-person commented 3 years ago

Hi, on my local machine I achieved a similar accuracy to the one you reported for the blackbox case.

I think you've set up ML Privacy Meter correctly; it is a matter of tuning the experiment parameters (how the target model is trained, using a smaller number of training samples, etc.).
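For example, reducing the number of training samples makes the target model overfit more, which typically raises membership inference accuracy. A minimal sketch of that idea, not ml_privacy_meter code (`target_model` stands for the tutorial's AlexNet target model, and the subset size is illustrative):

```python
import numpy as np
import tensorflow as tf

# Hypothetical illustration of the "fewer training samples" suggestion:
# a target model trained on a small subset overfits more, which generally
# makes membership inference easier.
(x_train, y_train), _ = tf.keras.datasets.cifar100.load_data()
idx = np.random.choice(len(x_train), 5000, replace=False)  # 5k of 50k samples
# target_model is the AlexNet target model built earlier in the tutorial.
target_model.fit(x_train[idx] / 255.0, y_train[idx], epochs=50, batch_size=64)
```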

chris-prenode commented 3 years ago

Alright, thank you @amad-person. I achieved better results with the hints and tips you referenced. Thank you for your support.