anupamme opened 4 years ago
The experiment setup we want to test is:
Tasks:
Task 1: CIFAR10 (Image Classification) | VGG-9
Task 2: EMNIST (Digit Classification) | LeNet
Task 3: Sentiment140 (Sentiment Classification) | LSTM
Task 4: Reddit Dataset (Next-word Prediction) | LSTM
The Sentiment140 and Reddit datasets can be taken from here: https://github.com/TalwalkarLab/leaf/tree/master/data
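As a sketch, the four task/model pairings above could be kept in a small registry like the following; all names here (the TASKS dict, the dataset and model strings) are illustrative placeholders I am assuming, not identifiers from this repo:

```python
# Hypothetical task registry for the experiment setup above.
# Dataset/model strings are placeholders, not this repo's identifiers.
TASKS = {
    "task1": {"dataset": "cifar10",      "model": "vgg9",  "goal": "image classification"},
    "task2": {"dataset": "emnist",       "model": "lenet", "goal": "digit classification"},
    "task3": {"dataset": "sentiment140", "model": "lstm",  "goal": "sentiment classification"},
    "task4": {"dataset": "reddit",       "model": "lstm",  "goal": "next-word prediction"},
}
```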
Hey! I am getting this error when I run the run_simulated_averaging.sh file:
Traceback (most recent call last):
File "simulated_averaging.py", line 165, in
It is perhaps because of your PyTorch version. It is running fine for me, and I am using this:
torch==1.5.1 torchfile==0.1.0 torchvision==0.6.1
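As a quick sanity check (a small script I am sketching here, not part of the repo), you can compare your installed versions against the combination above:

```python
# Compare installed package versions against the combination reported
# to work above; a mismatch is a likely cause of the traceback.
from importlib.metadata import version

expected = {"torch": "1.5.1", "torchfile": "0.1.0", "torchvision": "0.6.1"}
for pkg, want in expected.items():
    have = version(pkg)
    status = "OK" if have.startswith(want) else f"MISMATCH (expected {want})"
    print(f"{pkg}=={have}  {status}")
```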
We want to measure how black-box attacks do against the Krum and Multi-Krum defenses. For background, see the last paragraph of page 9 of the paper. I am quoting the excerpt:
"Since the black-box attack does not have any norm difference constraint, training over the poisoned dataset usually leads to a large norm difference. Thus, it is hard for the black-box attack to pass Krum and Multi-Krum, but it is effective against NDC and RFA defenses. This is presumably because the attacker can still slowly inject a part of the backdoor via a series of attacks."
We want to verify these claims.
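To make the quoted mechanism concrete, here is a minimal sketch of the Krum/Multi-Krum selection rule from Blanchard et al. (NeurIPS 2017); the NumPy code and function names are my own illustration, not this repo's implementation. Krum scores each client update by the sum of squared distances to its n - f - 2 nearest neighbors and keeps the lowest-scoring update(s), so a poisoned update with a large norm difference sits far from the benign cluster, receives a large score, and is filtered out:

```python
# Sketch of Krum / Multi-Krum (Blanchard et al.), not this repo's code.
import numpy as np

def krum_scores(updates, num_byzantine):
    """Score each update by the sum of squared distances to its
    n - f - 2 nearest neighbors (lower = more central)."""
    n = len(updates)
    k = n - num_byzantine - 2  # neighbors counted per update
    flat = np.stack([np.ravel(u) for u in updates])
    # Pairwise squared Euclidean distances between flattened updates.
    dists = np.sum((flat[:, None, :] - flat[None, :, :]) ** 2, axis=-1)
    scores = np.empty(n)
    for i in range(n):
        neighbors = np.sort(np.delete(dists[i], i))  # drop self-distance
        scores[i] = neighbors[:k].sum()
    return scores

def multi_krum(updates, num_byzantine, m=1):
    """Average the m lowest-scoring updates; m=1 is plain Krum."""
    scores = krum_scores(updates, num_byzantine)
    chosen = np.argsort(scores)[:m]
    flat = np.stack([np.ravel(u) for u in updates])
    return flat[chosen].mean(axis=0)
```

For example, with 10 clients and f = 2, an unconstrained poisoned update usually gets the largest score and is never selected, whereas, per the quoted excerpt, defenses like NDC and RFA still let the attacker slowly inject a part of the backdoor over a series of rounds.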