aks2203 / poisoning-benchmark

A unified benchmark problem for data poisoning attacks
https://arxiv.org/abs/2006.12557
MIT License

White-box setting #6

Closed. 9yte closed this issue 3 years ago.

9yte commented 3 years ago

Hi,

First of all, I want to express my appreciation for your amazing work on benchmarking this field of research. It's indeed very valuable.

I have some doubts, which might seem naive, but I wanted to be 100% sure before proceeding with your benchmark.

In the transfer learning setting, does white-box mean that the victim uses the exact same model as the attacker? Or is only the architecture the same (i.e., the parameters of the model are different)?

A somewhat related question: in learning_module.py, you set model_paths['cifar10']['whitebox'] to ResNet18_CIFAR100_A.pth, but among the pretrained models you have shared with us, there is no model with that name. In particular, there is a model named ResNet18_CIFAR100.pth. Is that the model you are referring to in learning_module.py, i.e., the model you used to craft poison samples? If yes, is that also the model used by the victim in the white-box setting?

Thanks a lot in advance for answering my questions.

aks2203 commented 3 years ago

Hi,

Thank you for pointing out the incorrect path. It has been fixed. See this commit.

In the white-box setting, the attacker has the exact parameters of the victim model. The correction in learning_module.py should reflect that.
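
For concreteness, here is a minimal sketch of what that implies. The benchmark ships its own CIFAR ResNet-18, so the `build_model` helper and the checkpoint path below are stand-ins for illustration, not the repo's exact code:

```python
import torch
from torchvision.models import resnet18

# Stand-in builder: torchvision's resnet18 is used here only to keep the
# sketch self-contained; the benchmark defines its own CIFAR ResNet-18.
def build_model():
    return resnet18(num_classes=100)

# Assumed location of the shared checkpoint that model_paths['cifar10']['whitebox']
# points to after the fix.
ckpt_path = "pretrained_models/ResNet18_CIFAR100.pth"
state = torch.load(ckpt_path, map_location="cpu")

attacker_model = build_model()  # the attacker crafts poisons with this copy
victim_model = build_model()    # the victim fine-tunes this copy

attacker_model.load_state_dict(state)
victim_model.load_state_dict(state)

# White-box: every parameter of the two models is identical before training.
for p_a, p_v in zip(attacker_model.parameters(), victim_model.parameters()):
    assert torch.equal(p_a, p_v)
```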

Is this helpful? May we close the issue?

-Avi

9yte commented 3 years ago

Dear Avi,

Thanks for your help. That answered my question. :)

Best, Hojjat

aks2203 commented 3 years ago

Wonderful!