jiahuigeng opened 2 years ago
python main_fed.py --dataset=MNIST --model=cnn --alpha=1 --num_users=10 --local_ep=5
Model architecture:
Mnistcnn(
  (conv1): Conv2d(1, 32, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=1024, out_features=512, bias=True)
  (fc2): Linear(in_features=512, out_features=128, bias=True)
  (fc3): Linear(in_features=128, out_features=10, bias=True)
)
Aggregation over all clients
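For context, a module producing this repr could be sketched as below. The forward pass is an assumption (the printed repr only lists the layers), but fc1's in_features=1024 = 64·4·4 is consistent with two 5×5 convs and two 2×2 max-pools on a 28×28 MNIST input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mnistcnn(nn.Module):
    """Sketch matching the printed layer list; pooling/activations are assumed."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)   # 28x28 -> 24x24
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)  # 12x12 -> 8x8
        self.fc1 = nn.Linear(64 * 4 * 4, 512)          # 1024 in_features, as printed
        self.fc2 = nn.Linear(512, 128)
        self.fc3 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 24x24 -> 12x12
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 8x8 -> 4x4
        x = x.flatten(1)                            # (N, 1024)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```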
y_loss_all = torch.tensor(y_loss_all).to(self.args.device)
Prediction loss based source inference attack accuracy: 176/1000 (17.60%)
Round 0, Average training loss 0.157
Prediction loss based source inference attack accuracy: 175/1000 (17.50%)
Round 1, Average training loss 0.051
Prediction loss based source inference attack accuracy: 157/1000 (15.70%)
Round 2, Average training loss 0.026
Prediction loss based source inference attack accuracy: 162/1000 (16.20%)
Round 3, Average training loss 0.018
Prediction loss based source inference attack accuracy: 153/1000 (15.30%)
Round 4, Average training loss 0.015
Prediction loss based source inference attack accuracy: 156/1000 (15.60%)
Round 5, Average training loss 0.011
Prediction loss based source inference attack accuracy: 152/1000 (15.20%)
Round 6, Average training loss 0.009
Prediction loss based source inference attack accuracy: 141/1000 (14.10%)
Round 7, Average training loss 0.008
Prediction loss based source inference attack accuracy: 157/1000 (15.70%)
Round 8, Average training loss 0.008
Prediction loss based source inference attack accuracy: 138/1000 (13.80%)
Round 9, Average training loss 0.004
Prediction loss based source inference attack accuracy: 140/1000 (14.00%)
Round 10, Average training loss 0.004
Prediction loss based source inference attack accuracy: 129/1000 (12.90%)
Round 11, Average training loss 0.005
Prediction loss based source inference attack accuracy: 137/1000 (13.70%)
Round 12, Average training loss 0.003
Prediction loss based source inference attack accuracy: 136/1000 (13.60%)
Round 13, Average training loss 0.002
Prediction loss based source inference attack accuracy: 135/1000 (13.50%)
Round 14, Average training loss 0.003
Prediction loss based source inference attack accuracy: 136/1000 (13.60%)
Round 15, Average training loss 0.002
Prediction loss based source inference attack accuracy: 119/1000 (11.90%)
Round 16, Average training loss 0.002
Prediction loss based source inference attack accuracy: 129/1000 (12.90%)
Round 17, Average training loss 0.002
Prediction loss based source inference attack accuracy: 126/1000 (12.60%)
Round 18, Average training loss 0.001
Prediction loss based source inference attack accuracy: 129/1000 (12.90%)
Round 19, Average training loss 0.000
Experimental details:
Model : cnn
Optimizer : sgd
Learning rate: 0.01
Global Rounds: 20
Federated parameters:
MNIST dataset, has 10 classes
Level of non-iid data distribution: α = 1.0
Number of users : 10
Local Batch size : 12
Local Epochs : 5
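The α above typically parameterizes a Dirichlet label split across clients (an assumption about this script, but the standard convention): for each class, sample proportions p ~ Dir(α·1) over clients and give each client that fraction of the class's samples; smaller α means more skewed local label distributions. A minimal sketch:

```python
import numpy as np

def dirichlet_partition(labels, num_users=10, alpha=1.0, seed=0):
    """Split sample indices among clients via a per-class Dirichlet draw.

    Smaller alpha -> more non-IID (skewed) label distribution per client.
    """
    rng = np.random.default_rng(seed)
    user_idxs = [[] for _ in range(num_users)]
    for c in np.unique(labels):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # fraction of class c assigned to each client
        p = rng.dirichlet(alpha * np.ones(num_users))
        cuts = (np.cumsum(p)[:-1] * len(idx_c)).astype(int)
        for u, part in enumerate(np.split(idx_c, cuts)):
            user_idxs[u].extend(part.tolist())
    return user_idxs
```

Each client would then run its local_ep epochs over its own index set.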
Experimental result summary:
Training accuracy of the joint model: 99.96
Testing accuracy of the joint model: 99.41
Random guess baseline of source inference : 10.00
Highest prediction loss based source inference accuracy: 17.60
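The "prediction loss based" attack reported in these logs follows the source-inference idea: an honest-but-curious server evaluates each training record under every client's uploaded local model and guesses that the record belongs to the client whose model assigns it the lowest loss. A minimal sketch of the decision rule, assuming a per-client loss matrix has already been computed (the matrix and function names here are illustrative, not the repo's API):

```python
import numpy as np

def source_inference_accuracy(per_client_losses, true_sources):
    """per_client_losses: (num_users, num_records) array, loss of each record
    under each client's local model; true_sources: (num_records,) owner ids.

    Predict each record's owner as the client with the smallest loss on it.
    """
    predicted = np.argmin(per_client_losses, axis=0)
    correct = int((predicted == true_sources).sum())
    return correct, correct / len(true_sources)
```

With 10 clients, random guessing gives 10% accuracy, which is the baseline quoted in the summaries; the log shows the attack well above that early on, decaying as the global model converges and per-client losses become less distinguishable.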
python main_fed.py --dataset=Synthetic --model=mlp --alpha=1 --num_users=10 --local_ep=5
Model architecture:
MLP(
  (layer_input): Linear(in_features=60, out_features=200, bias=True)
  (relu): ReLU()
Prediction loss based source inference attack accuracy: 272/1000 (27.20%)
Prediction loss based source inference attack accuracy: 256/1000 (25.60%)
Round 1, Average training loss 0.206
Prediction loss based source inference attack accuracy: 266/1000 (26.60%)
Round 5, Average training loss 0.092
Prediction loss based source inference attack accuracy: 255/1000 (25.50%)
Round 6, Average training loss 0.083
Prediction loss based source inference attack accuracy: 274/1000 (27.40%)
Round 7, Average training loss 0.076
Prediction loss based source inference attack accuracy: 257/1000 (25.70%)
Round 8, Average training loss 0.071
Prediction loss based source inference attack accuracy: 250/1000 (25.00%)
Round 9, Average training loss 0.067
Prediction loss based source inference attack accuracy: 257/1000 (25.70%)
Round 10, Average training loss 0.063
Prediction loss based source inference attack accuracy: 231/1000 (23.10%)
Round 11, Average training loss 0.060
Prediction loss based source inference attack accuracy: 219/1000 (21.90%)
Round 12, Average training loss 0.057
Prediction loss based source inference attack accuracy: 230/1000 (23.00%)
Round 13, Average training loss 0.055
Prediction loss based source inference attack accuracy: 234/1000 (23.40%)
Round 14, Average training loss 0.052
Prediction loss based source inference attack accuracy: 225/1000 (22.50%)
Round 15, Average training loss 0.050
Prediction loss based source inference attack accuracy: 211/1000 (21.10%)
Round 16, Average training loss 0.047
Prediction loss based source inference attack accuracy: 225/1000 (22.50%)
Round 17, Average training loss 0.047
Prediction loss based source inference attack accuracy: 214/1000 (21.40%)
Round 18, Average training loss 0.045
Prediction loss based source inference attack accuracy: 239/1000 (23.90%)
Round 19, Average training loss 0.043
Experimental details:
Model : mlp
Optimizer : sgd
Learning rate: 0.01
Global Rounds: 20
Federated parameters:
Synthetic dataset, has 10 classes
Level of non-iid data distribution: α = 1.0
Number of users : 10
Local Batch size : 12
Local Epochs : 5
Experimental result summary:
Training accuracy of the joint model: 92.81
Testing accuracy of the joint model: 92.31
Random guess baseline of source inference : 10.00
Highest prediction loss based source inference accuracy: 29.20
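"Aggregation over all clients" in the logs above refers to FedAvg-style parameter averaging of the client updates each round. A minimal sketch with plain dict-of-arrays states and uniform weights (which matches equal-sized client shards; the real implementation may weight by client data size):

```python
import numpy as np

def fed_avg(client_states):
    """Elementwise average of a list of state dicts (param name -> array)."""
    avg = {}
    for name in client_states[0]:
        avg[name] = np.mean([s[name] for s in client_states], axis=0)
    return avg
```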