ddghjikle opened this issue 3 years ago
Hi, thanks for your interest in this work. We will update the code with the training process ASAP (in a couple of days). We will let you know once it is done.
Hi, I am glad to see that several training scripts have been updated. However, the current training process seems to differ from the paper. For example, the model does not contain a decoder, and during training the encoder does not take target labels as inputs. Similarly, the Code Reconstruction Loss and Image Reconstruction Loss in Figure 3 are not calculated.
@ddghjikle Hi, I am reading this paper too. If I am not mistaken, the training of the encoder and the training of the backdoor attack are separated, and the code released at present does not contain the process for training the encoder or generating trigger images with it. The losses you mentioned are the losses for the encoder, so they are not calculated here. Since the encoder used is StegaStamp, you can find the code in that repository.
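To make those two losses concrete, here is a minimal PyTorch-style sketch of what the encoder-decoder training objective computes; the names `encoder`, `decoder`, and the loss weights are illustrative assumptions, not the released StegaStamp code (which also uses perceptual and adversarial terms plus image perturbations):

```python
import torch
import torch.nn.functional as F

def encoder_decoder_loss(encoder, decoder, image, secret_bits,
                         w_img=1.0, w_code=1.0):
    # Sketch only: encoder embeds the secret into the image,
    # decoder tries to recover it from the encoded image.
    encoded = encoder(secret_bits, image)      # stego / trigger image
    decoded_bits = decoder(encoded)            # recovered secret logits

    # Image Reconstruction Loss: encoded image should stay close to the input
    image_loss = F.mse_loss(encoded, image)

    # Code Reconstruction Loss: decoder should recover the embedded secret
    code_loss = F.binary_cross_entropy_with_logits(decoded_bits, secret_bits)

    return w_img * image_loss + w_code * code_loss
```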
Hi, I mean the encoder and decoder are jointly trained, but separately from the training stage of the backdoor attack. The workflow is: first train the encoder-decoder pair, then use the trained encoder to generate the trigger images for the attack. And the encoder in encode_image.py takes `secret` and `image` as inputs, not two images.
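For illustration, here is a minimal sketch of how such an encoder would be invoked to produce a trigger image; the helper below and the commented-out loading step are hypothetical placeholders, not the actual encode_image.py interface (the real script uses a trained checkpoint and BCH error correction for the secret):

```python
import numpy as np

def secret_to_bits(secret: str, length: int = 100) -> np.ndarray:
    """Turn a short string into a fixed-length bit vector
    (toy version, without the BCH coding used in StegaStamp)."""
    bits = np.unpackbits(np.frombuffer(secret.encode("utf-8"), dtype=np.uint8))
    out = np.zeros(length, dtype=np.float32)
    out[: min(length, bits.size)] = bits[:length]
    return out

# Hypothetical usage:
# encoder = load_trained_encoder("saved_model/")     # trained StegaStamp encoder
# image = load_image("sample.png")                   # H x W x 3, range [0, 1]
# secret_bits = secret_to_bits("attacker-chosen string")
# trigger_image = encoder(secret_bits, image)        # same shape as the input image
```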
Yes, your understanding is correct. The training of encoder-decoder can be found in the original repo of StegaStamp :)
Besides, we do not use the decoder in the attack stage. It is only used for (jointly) training the encoder, which is further used to generate the poisoned images in our attack.
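If it helps, here is a rough sketch of that attack-stage step (generating poisoned samples before ordinary victim-model training); `encoder`, `secret_bits`, `target_label`, and the poisoning rate are placeholders, not the released code:

```python
import random

def build_poisoned_dataset(dataset, encoder, secret_bits, target_label,
                           poison_rate=0.1):
    """Sketch: replace a fraction of the clean samples with encoder-generated
    trigger images relabeled to the attacker's target class. The victim model
    is then trained on the mixed dataset with an ordinary training loop."""
    poisoned = []
    for image, label in dataset:
        if random.random() < poison_rate:
            trigger_image = encoder(secret_bits, image)  # sample-specific trigger
            poisoned.append((trigger_image, target_label))
        else:
            poisoned.append((image, label))
    return poisoned
```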
Will you upload the code for the whole process, including the generation of poisoned samples and model training on the poisoned dataset?
Hi, thanks for your attention to our work. You can use our toolbox to run ISSBA directly. Please find more details in our code and the test example.
Hi, thanks very much for sharing this wonderful work and its code. There is no doubt that this work provides insights into the backdoor-attack area. We are currently preparing new work for CVPR 2022, and we want to compare its training process with that of your work. Therefore, we hope you can provide the complete code, including the training parts. Then we will be able to systematically compare your work with ours and comprehensively discuss the advantages of your work.