JonasGeiping / breaching

Breaching privacy in federated learning scenarios for vision and text
MIT License

Questions about dataset configuration and attack initialization #6

Closed: anirban-nath closed this issue 1 year ago

anirban-nath commented 1 year ago

In my use case, all aspects of my model are custom, and I used a medical image dataset to train it. I am using the minimal_example.py file as a template for designing the attack, but I have a few trivial questions:

  1. In the mean and std sections of data_cfg_default, what are the three numbers supposed to mean?
  2. For a custom model, what other parameters am I supposed to change (other than model definition, dataset, and loss function)?
JonasGeiping commented 1 year ago

Hi!

  1. These are the dataset mean and std per color channel, used for input normalization. You can set these to (0,0,0), (1,1,1) if your pipeline does not include normalization, or set normalize=False. A sketch for estimating them from your own data follows below.
  2. Are you asking about settings in the case group, or in general?
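For reference, here is a minimal sketch for estimating those per-channel statistics from a custom dataset. channel_stats is a hypothetical helper, not part of breaching; it assumes a dataset that yields (image, label) items with equal-sized CHW image tensors:

```python
import torch
from torch.utils.data import DataLoader

def channel_stats(dataset, batch_size=64):
    # Hypothetical helper, not part of breaching: estimates the per-channel
    # mean/std that go into data_cfg_default.mean and data_cfg_default.std.
    loader = DataLoader(dataset, batch_size=batch_size)
    n, mean, sq_mean = 0, torch.zeros(()), torch.zeros(())
    for image, *_ in loader:  # assumes (image, label, ...) items, all images equal-sized
        b = image.shape[0]
        # Running averages of the per-channel mean and mean of squares.
        mean = (mean * n + image.mean(dim=(0, 2, 3)) * b) / (n + b)
        sq_mean = (sq_mean * n + image.pow(2).mean(dim=(0, 2, 3)) * b) / (n + b)
        n += b
    std = (sq_mean - mean.pow(2)).sqrt()
    return tuple(mean.tolist()), tuple(std.tolist())
```

For grayscale data these come out as 1-tuples rather than the 3-tuples in data_cfg_default.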
anirban-nath commented 1 year ago

Hi, I was asking more in a general sense. I am new to model attacks but I have to perform one within a short amount of time to generate some results.

I also have one other follow-up question. My model is a multi-task model that produces two separate losses: a dice loss for segmentation and a cross-entropy loss for classification. I add these into a single loss and backpropagate through the whole model to train both tasks simultaneously (sketched below).
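For concreteness, a runnable toy version of that combined objective (all shapes and criteria here are stand-ins for my actual code):

```python
import torch

# Stand-in tensors for the actual segmentation/classification head outputs.
seg_logits = torch.randn(2, 1, 64, 64, requires_grad=True)   # segmentation head
cls_logits = torch.randn(2, 4, requires_grad=True)           # classification head
seg_target = torch.randint(0, 2, (2, 1, 64, 64)).float()
cls_target = torch.randint(0, 4, (2,))

# Soft dice loss with a smoothing term, plus standard cross-entropy.
probs = torch.sigmoid(seg_logits)
dice = 1 - (2 * (probs * seg_target).sum() + 1) / (probs.sum() + seg_target.sum() + 1)
ce = torch.nn.functional.cross_entropy(cls_logits, cls_target)

loss = dice + ce   # single combined objective
loss.backward()    # one backward pass trains both tasks simultaneously
```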

In this attack implementation, I run into two issues:

  1. In the regularizers.py file, the class TotalVariation has an attribute self.groups defined as self.groups = 6 if self.double_opponents else 3. This declaration causes an error in diffs = torch.nn.functional.conv2d(tensor, self.weight, None, stride=1, padding=1, dilation=1, groups=self.groups): the code expects my image to have 3 channels but finds only one (presumably because my images are grayscale). As a workaround, I set self.groups = 1. Is this correct? Also, what is double_opponents?

  2. The model I am currently working on is CNN-based, but I will be working on a Transformer model after this. I want to perform the APRIL attack on it, but the APRIL example file specifies a very standard Vision Transformer as the model. Is it possible to perform the APRIL attack on a custom transformer model?

JonasGeiping commented 1 year ago
  1. Yes, set groups=1 for grayscale data, and turn off double opponents, which is a variant of total variation for color data (see https://link.springer.com/chapter/10.1007/978-3-319-46475-6_40). A minimal sketch of such a grouped-difference TV term is below this list.
  2. The APRIL attack, as written in the original publication, works as an analytic attack only for the transformer architecture defined there. For general transformers, the attack has to be adapted into an optimization-based attack with an analytic component. The original paper has more details.
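To make point 1 concrete, here is a simplified sketch of a grouped-convolution TV term in the spirit of the regularizers.py code quoted above (not the repo's exact kernels or reduction). It shows why groups must match the input channel count:

```python
import torch
import torch.nn.functional as F

def tv_loss(img, groups=None):
    """Simplified total variation via grouped finite-difference convolution.

    img: (B, C, H, W). conv2d requires C == groups * weight.shape[1], so
    groups must match the channel count: 1 for grayscale, 3 for RGB
    (6 when double_opponents is enabled in the repo's version).
    """
    B, C, H, W = img.shape
    groups = C if groups is None else groups
    # Forward-difference kernels in x and y, applied to each channel separately.
    dx = torch.tensor([[0.0, 0.0, 0.0], [0.0, -1.0, 1.0], [0.0, 0.0, 0.0]])
    dy = torch.tensor([[0.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 1.0, 0.0]])
    weight = torch.stack([dx, dy]).unsqueeze(1)        # (2, 1, 3, 3)
    weight = weight.repeat(groups, 1, 1, 1).to(img)    # (2 * groups, 1, 3, 3)
    diffs = F.conv2d(img, weight, None, stride=1, padding=1, dilation=1, groups=groups)
    return diffs.abs().mean()

# Grayscale example: groups defaults to C == 1, matching the workaround above.
print(tv_loss(torch.rand(1, 1, 28, 28)))
```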
anirban-nath commented 1 year ago

> For general transformers, the attack has to be adapted into an optimization-based attack with an analytic component. The original paper has more details.

Thanks for the heads-up. I will definitely look into the paper. Besides APRIL, are there other attacks that I can perform on custom vision transformers, ideally with few changes to the minimal_example.py code?

JonasGeiping commented 1 year ago

Just to clarify: in principle, all optimization-based attacks in this repo can also be used against vision transformers, since they are general-purpose attacks that do not depend on the model architecture. That said, architecture-specific attacks can be stronger. Another vision-transformer-specific attack is GradViT (not yet implemented in this repo): https://arxiv.org/abs/2203.11894. A schematic of the generic optimization-based recipe is sketched below.
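For orientation, this is the general recipe such optimization-based attacks follow, as a bare-bones schematic (assumed names and a simple squared-error matching loss, not the repo's implementation): optimize a dummy input until its gradients match the gradients observed from the victim.

```python
import torch

def invert_gradients(model, loss_fn, observed_grads, labels, input_shape,
                     steps=1000, lr=0.1, tv_weight=1e-2):
    # Schematic optimization-based gradient inversion; architecture-agnostic,
    # so it applies to CNNs and vision transformers alike. Not the repo's code.
    x = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Gradients of the training loss w.r.t. the model parameters,
        # kept differentiable so the matching loss can backprop into x.
        grads = torch.autograd.grad(loss_fn(model(x), labels),
                                    model.parameters(), create_graph=True)
        match = sum((g - o).pow(2).sum() for g, o in zip(grads, observed_grads))
        # Simple anisotropic TV prior on the reconstruction.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() \
             + (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (match + tv_weight * tv).backward()
        opt.step()
    return x.detach()
```

The attacks in this repo refine this basic recipe considerably, e.g. with different matching losses and stronger regularizers.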

anirban-nath commented 1 year ago

Thank you for your answers.