yuezunli / ISSBA

Invisible Backdoor Attack with Sample-Specific Triggers

Hi, we are preparing backdoor-attack works for CVPR2022. Could you please provide us with the training code? #1

Open ddghjikle opened 3 years ago

ddghjikle commented 3 years ago

Hi, thanks very much for sharing this wonderful work and its code. This work undoubtedly provides insights into the backdoor-attack area. Recently, we have been preparing new works for CVPR2022, in which we want to compare our training process with that of your work. Therefore, we hope you can provide the complete code, including the training parts. Then we will be able to systematically compare your work with ours and comprehensively discuss the advantages of your work.

yuezunli commented 3 years ago

Hi, thanks for your interest in this work. We will update the code with the training process ASAP (in a couple of days). We will let you know once it is done.

ddghjikle commented 3 years ago

Hi, we are glad to see that several training scripts have been updated. However, the released training process seems to differ from the paper. For example, the model does not contain a decoder, and during training the encoder does not take target labels as inputs. Similarly, the code reconstruction loss and image reconstruction loss in Figure 3 are not calculated.

zeabin commented 3 years ago

@ddghjikle Hi, I am reading this paper too. If I am not mistaken, the training of the encoder and the training of the backdoor attack are separated, and the code released at present does not contain the process for training the encoder or for generating trigger images with it. The losses you mentioned are the losses for the encoder, and are thus not calculated here. Since the encoder used is StegaStamp, you can find that code in its repository.

zeabin commented 3 years ago

Hi, I mean the encoder and decoder are jointly trained, but separately from the training stage of the backdoor attack. The workflow is:

  1. train encoder and decoder jointly (Figure 5)
  2. generate poisoned images using the encoder (Figure 4(a) Attack Stage)
  3. embed a backdoor using the poisoned and benign images (Figure 4(b) Training Stage)

Also note that the encoder in encode_image.py takes a secret and an image as inputs, not two images.
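Concretely, step 2 boils down to calling the trained encoder on a (secret, image) pair. Below is a minimal sketch; `ToyEncoder` and the label-to-bit-string encoding are stand-ins of mine, not the actual StegaStamp architecture or the exact interface of encode_image.py:

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained StegaStamp encoder; the real architecture
# and checkpoint come from the StegaStamp repository.
class ToyEncoder(nn.Module):
    def __init__(self, num_bits=100, img_dim=3 * 32 * 32):
        super().__init__()
        self.fc = nn.Linear(num_bits + img_dim, img_dim)

    def forward(self, secret, image):
        flat = torch.cat([secret, image.flatten(1)], dim=1)
        # Additive residual keeps the poisoned image visually close
        # to the original (an "invisible", sample-specific trigger).
        return image + 0.01 * self.fc(flat).view_as(image)

encoder = ToyEncoder().eval()

def poison(image, target_label, num_bits=100):
    # Represent the target label as a fixed-length "secret" bit string.
    secret = torch.zeros(1, num_bits)
    secret[0, target_label] = 1.0
    with torch.no_grad():
        return encoder(secret, image.unsqueeze(0)).squeeze(0)

poisoned = poison(torch.rand(3, 32, 32), target_label=0)
```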

THUYimingLi commented 2 years ago

> @ddghjikle Hi, I am reading this paper too. If I am not mistaken, the training of the encoder and the training of the backdoor attack are separated, and the code released at present does not contain the process for training the encoder or for generating trigger images with it. The losses you mentioned are the losses for the encoder, and are thus not calculated here. Since the encoder used is StegaStamp, you can find that code in its repository.

Yes, your understanding is correct. The training of the encoder and decoder can be found in the original repo of StegaStamp :)

Besides, we do not use the decoder in the attack stage. It is only used for (jointly) training the encoder, which is in turn used for generating poisoned images in our attack.
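For reference, here is a minimal, self-contained sketch of that joint training, with toy stand-in networks (the real architectures are in the StegaStamp repo) and random batches in place of a real data loader; the two loss names follow Figure 3:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the StegaStamp encoder/decoder.
class ToyEncoder(nn.Module):
    def __init__(self, num_bits=100, img_dim=3 * 32 * 32):
        super().__init__()
        self.fc = nn.Linear(num_bits + img_dim, img_dim)

    def forward(self, secret, image):
        flat = torch.cat([secret, image.flatten(1)], dim=1)
        return image + self.fc(flat).view_as(image)

class ToyDecoder(nn.Module):
    def __init__(self, num_bits=100, img_dim=3 * 32 * 32):
        super().__init__()
        self.fc = nn.Linear(img_dim, num_bits)

    def forward(self, image):
        return self.fc(image.flatten(1))

encoder, decoder = ToyEncoder(), ToyDecoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

# Random (image, secret) batches stand in for a real data loader.
loader = [(torch.rand(8, 3, 32, 32), torch.randint(0, 2, (8, 100)).float())
          for _ in range(10)]

for image, secret in loader:
    encoded = encoder(secret, image)    # embed the secret into the image
    recovered = decoder(encoded)        # decoder tries to read it back
    l_img = F.mse_loss(encoded, image)  # image reconstruction loss
    l_code = F.binary_cross_entropy_with_logits(recovered, secret)  # code reconstruction loss
    loss = l_img + l_code
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, only `encoder` is kept for generating poisoned images.
```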

v-mipeng commented 2 years ago

Will you upload the code for the whole process, including the generation of poisoned samples and model training on the poisoned dataset?

THUYimingLi commented 2 years ago

> Will you upload the code for the whole process, including the generation of poisoned samples and model training on the poisoned dataset?

Hi, thanks for your attention to our work. You can use our toolbox to run ISSBA directly. Please find more details in our code and test example.
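Independent of the toolbox, the core of step 3 is ordinary supervised training on the union of benign and poisoned samples. A minimal sketch with dummy tensors in place of real data and a hypothetical target label of 0:

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Dummy data: 90 benign samples with true labels, 10 poisoned samples
# relabeled to the (hypothetical) target class 0.
benign = TensorDataset(torch.rand(90, 3, 32, 32), torch.randint(0, 10, (90,)))
poisoned = TensorDataset(torch.rand(10, 3, 32, 32),
                         torch.zeros(10, dtype=torch.long))
loader = DataLoader(ConcatDataset([benign, poisoned]),
                    batch_size=32, shuffle=True)

# Any classifier works here; a linear model keeps the sketch short.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Standard training loop: the backdoor is embedded simply by fitting
# the mixed dataset.
for image, label in loader:
    loss = criterion(model(image), label)
    opt.zero_grad()
    loss.backward()
    opt.step()
```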