FLAIR-THU / PairedLogitsInversion

Implementation of "Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack" (CVPR 2023)

Ask about attack accuracy #1

Closed. Woting-jada closed this issue 6 months ago.

Woting-jada commented 6 months ago

Hello, author. When `--fedkd_type` is set to FedMD and the dataset is LAG, I get a pli_success of 91. Could you tell me how to reproduce the attack accuracy reported in the paper?

Koukyosyumei commented 6 months ago

Hi! Thanks for your interest in our work!! Could you share the full results? If you can also provide the option values you used, that would be very helpful.

Woting-jada commented 6 months ago

Hello, thank you for your response! Here are the complete results and parameter settings you asked about:

Args:

model_type: CNN
invmodel_type: InvCNN
num_communication: 5
batch_size: 64
inv_batch_size: 8
lr: 0.001
num_workers: 1
num_classes: 1000
inv_epoch: 3
inv_lr: 3e-05
loss_type: mse
dataset: LAG
fedkd_type: FedMD
attack_type: pli
client_num: 10
alpha: 5.0
ablation_study: 2
inv_temperature: 3.0
client_threshold: 0.25
server_threshold: 0.5
config_dataset: height: 64, width: 64, crop: True, channel: 3, data_folder: ./data/lag, target_celebrities_num: 200
config_fedkd: consensus_epoch: 1, revisit_epoch: 1, transfer_epoch_private: 5, transfer_epoch_public: 5, server_training_epoch: 1, use_server_model: True, weight_decay: 0.0001
random_seed: 42
gamma: 0.03
only_sensitive: 1
use_multi_models: 1

Results:

pli_success: 91
pli_too_close_to_public: 25
pli_ssim_private_mean: 0.39554589048027994
pli_ssim_private_std: 0.11281323336787769
pli_ssim_public_mean: nan
pli_ssim_public_std: nan
pli_mse_private_mean: 0.4999686150252819
pli_mse_private_std: 0.22810010115995788
pli_mse_public_mean: nan
pli_mse_public_std: nan
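
For reference on how summaries like the pli_ssim_* / pli_mse_* values above are typically produced, here is a minimal sketch, not the repo's actual code: per-target SSIM/MSE followed by mean/std aggregation, assuming reconstructions and references are float arrays in [0, 1] (`aggregate_metrics` is a hypothetical helper name). It also shows the most common source of nan in such summaries: averaging an empty list.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def aggregate_metrics(recons, refs):
    """Hypothetical helper: per-target SSIM/MSE, then mean/std,
    mirroring the pli_ssim_* / pli_mse_* summaries above."""
    ssims, mses = [], []
    for rec, ref in zip(recons, refs):
        # channel_axis=-1 treats the last axis as color (skimage >= 0.19)
        ssims.append(ssim(rec, ref, data_range=1.0, channel_axis=-1))
        mses.append(np.mean((rec - ref) ** 2))
    # np.mean([]) is nan: if no targets land in a category (e.g. the
    # "public" bucket), its mean and std come out as nan, as seen above.
    return np.mean(ssims), np.std(ssims), np.mean(mses), np.std(mses)
```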

Koukyosyumei commented 6 months ago

Something seems to have gone wrong with your dataset: ssim and mse should not be nan. A pli_success of 91, i.e. an attack accuracy of 45.5% (91 of the 200 target celebrities), also seems too low. I quickly ran the experiment on Google Colab, and the attack accuracy was almost the same as the value reported in the paper.
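
For context, the 45.5 figure follows directly from the settings above: config_dataset sets target_celebrities_num: 200, so a pli_success of 91 means 91 of 200 targets were recovered.

```python
pli_success = 91
num_targets = 200  # config_dataset: target_celebrities_num
print(f"attack accuracy: {100 * pli_success / num_targets:.1f}%")  # 45.5%
```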

https://colab.research.google.com/drive/1CSPFtCuDTW0DHXOSqjnQFN5-EHWef66k?usp=sharing

Woting-jada commented 6 months ago

Thank you very much for your response. After I downloaded the LAG dataset from a different source, the results became very similar to yours.
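
Since the root cause here was a bad copy of the LAG data, a quick integrity check of the data folder before training can catch this early. A minimal sketch, assuming the images sit under ./data/lag (the data_folder from config_dataset); this is a generic check, not part of the repo:

```python
from pathlib import Path
from PIL import Image

data_root = Path("./data/lag")  # data_folder from config_dataset
files = sorted(p for p in data_root.rglob("*")
               if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
print(f"found {len(files)} images")

bad = []
for p in files:
    try:
        with Image.open(p) as im:
            im.verify()  # integrity check without a full decode
    except Exception:
        bad.append(p)
print(f"{len(bad)} unreadable files")
```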