Naiftt / SPAFD

Official implementation of the paper "Suppressing Poisoning Attacks on Federated Learning for Medical Imaging", accepted at MICCAI 2022

Problems while reproducing similar results #3

Closed priyankupadhya17 closed 1 year ago

priyankupadhya17 commented 1 year ago

Hi, I am trying to reproduce the results reported in the paper for the HAM10K dataset.

(attached: results figure from the paper)

However, when I run the code with the following commands:

For No Attack:

python main.py --method COPOD --numOfAgents 10 --numOfClasses 7 --data noniid_skincancer --modelName ConvSkin --numOfAttacked 0 --local_steps 5 --numOfRounds 250 --seed 2 --lr 0.01 --B 890 --AttackType NoAttack

For 30% Attack:

python main.py --method COPOD --numOfAgents 10 --numOfClasses 7 --data noniid_skincancer --modelName ConvSkin --numOfAttacked 3 --local_steps 5 --numOfRounds 250 --seed 2 --lr 0.01 --B 890 --Attack True --AttackInfo "{1:'random_weight', 5:'random_weight', 7:'random_weight'}"

For 40% Attack:

python main.py --method COPOD --numOfAgents 10 --numOfClasses 7 --data noniid_skincancer --modelName ConvSkin --numOfAttacked 4 --local_steps 5 --numOfRounds 250 --seed 2 --lr 0.01 --B 890 --Attack True --AttackInfo "{1:'random_weight',2:'random_weight',8:'scaled_weight100', 9:'opposite_weight0.5'}"
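For reference, the `--AttackInfo` string in the commands above is a Python-style dict literal mapping client indices to attack types. A minimal sketch of how such a string can be parsed safely (this is a hypothetical helper for illustration, not necessarily how the repo's `main.py` does it):

```python
# Hypothetical sketch of parsing the --AttackInfo string into a
# {client_id: attack_type} mapping; the repo's actual parsing may differ.
import ast

def parse_attack_info(raw):
    """Parse a string like "{1:'random_weight', 5:'random_weight'}"
    into a dict mapping client indices to attack names."""
    info = ast.literal_eval(raw)  # safe evaluation of the dict literal
    if not isinstance(info, dict):
        raise ValueError("AttackInfo must describe a dict")
    return {int(k): str(v) for k, v in info.items()}

info = parse_attack_info("{1:'random_weight', 5:'random_weight', 7:'random_weight'}")
print(info)  # {1: 'random_weight', 5: 'random_weight', 7: 'random_weight'}
```

`ast.literal_eval` only accepts Python literals, so it avoids the code-execution risk of `eval` on a command-line argument.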

When I then plot the curve, I get the following:

(attached plot: MyStep_5)

I have kept all the parameters as mentioned in the paper. Were the results in the paper achieved using different parameters? Are the commands I am using correct? Please let me know.

Thanks

Naiftt commented 1 year ago

Hello, the parameters are right. However, with different seeds you get different results. In the plot you show, you get results close to the paper's, with smaller drops after each communication round, which is good.
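A minimal, stdlib-only illustration of why the `--seed` argument matters: the same seed reproduces a run exactly, while a different seed changes the "random" behaviour (and hence the curves). This is a toy sketch, not the repo's seeding code.

```python
# Toy demonstration (not the repo's code): identical seeds reproduce
# the same pseudo-random draws, different seeds do not.
import random

def simulated_client_noise(seed, num_clients=10):
    rng = random.Random(seed)  # a local RNG, so runs don't interfere
    return [rng.gauss(0.0, 1.0) for _ in range(num_clients)]

run_a = simulated_client_noise(seed=2)
run_b = simulated_client_noise(seed=2)
run_c = simulated_client_noise(seed=3)

print(run_a == run_b)  # True: same seed, bit-identical run
print(run_a == run_c)  # False: different seed, different run
```

In a real PyTorch training script there are additional sources of randomness (NumPy, torch, CUDA), so each would need its own seed set as well for full reproducibility.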

priyankupadhya17 commented 1 year ago

Ok, thank you. Could you let me know which seed was used for the original paper (if you remember)?

Another query: even though the seed is different, shouldn't the accuracy with no attacks be greater than, or at least as high as, the accuracy when there are attacks? Or am I missing something?

Naiftt commented 1 year ago

I'm not sure of the specific seeds that were used, as we were also trying to produce the results with random seeds; in any case, you're getting very similar results to the original experiments. As for the no-attack case: no, it's not necessarily higher. This is a Byzantine-robust aggregation rule, which works under the assumption of potentially malicious clients in the system, so the weighted average can sometimes be more stable in the presence of malicious clients. If you check the Krum paper, you will find a similar discussion. I hope that helps!
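To make the point about Byzantine-robust aggregation concrete, here is a small sketch of the Krum rule mentioned above (Blanchard et al., 2017), in plain Python. This is illustrative only; it is not the COPOD-based rule from the paper. Krum picks the single client update whose summed squared distance to its n − f − 2 nearest neighbours is smallest, so one wildly corrupted update cannot drag the result the way it drags a plain average.

```python
# Illustrative Krum sketch (not the repo's aggregation code): a robust
# rule ignores an outlier update that would wreck the plain average.
import math

def krum(updates, num_byzantine):
    """Return the update with the smallest summed squared distance
    to its n - f - 2 nearest neighbours (the Krum selection rule)."""
    n = len(updates)
    k = n - num_byzantine - 2  # number of neighbours used in the score
    scores = []
    for i, u in enumerate(updates):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(u, v))
            for j, v in enumerate(updates) if j != i
        )
        scores.append(sum(dists[:k]))
    return updates[scores.index(min(scores))]

# Four honest clients near (1, 1) plus one 'random_weight'-style outlier.
updates = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0], [50.0, -40.0]]

plain_mean = [sum(c) / len(updates) for c in zip(*updates)]
robust = krum(updates, num_byzantine=1)

print(plain_mean)  # [10.8, -7.2]: dragged far from (1, 1) by one outlier
print(robust)      # [1.0, 1.0]: an honest update is selected
```

The same filtering that protects against attackers also discards honest-but-noisy updates, which is why a robust rule can occasionally look smoother than plain averaging even in the no-attack setting.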

priyankupadhya17 commented 1 year ago

Thank you so much for the valuable information :)