jinyuan-jia / BadEncoder


Code for training backdoored CLIP? #3

Closed Oklahomawhore closed 7 months ago

Oklahomawhore commented 8 months ago

Hello dear author, I'm doing backdoor research against self-supervised contrastive learning and I find your work very helpful. In your paper you mention that you trained a backdoored version of CLIP with the CIFAR10 dataset, but I cannot find the pretraining code for CLIP in the repo. Is the code available? Thank you very much.

liu00222 commented 7 months ago

Hello, I think you are referring to the code that fine-tunes the original clean CLIP model to a backdoored one.

The code is already in ./badencoder.py. You will need to:

- set args.encoder_usage_info to "CLIP";
- set args.shadow_dataset to "cifar10_224";
- specify the right path to the clean CLIP encoder;
- set the trigger to "./trigger/trigger_pt_white_173_50_ap_replace.npz" (the CLIP encoder expects 224x224 input images);
- select the reference input from ./reference/CLIP/*;
- modify other necessary settings (e.g., learning rate, batch size) as discussed in our paper.
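For anyone landing here later, the settings above might translate into an invocation roughly like the sketch below. Note this is only a guess at the command line: the flag spellings (`--pretrained_encoder`, `--trigger_file`, `--reference_file`) and the placeholder paths/values are my assumptions from the `args.*` names in this comment, not verified against the argparse block in ./badencoder.py, so check the script before running.

```shell
# Sketch only -- flag names and placeholder values are assumptions;
# confirm them against the argparse definitions in badencoder.py.
python3 badencoder.py \
  --encoder_usage_info CLIP \
  --shadow_dataset cifar10_224 \
  --pretrained_encoder <PATH_TO_CLEAN_CLIP_ENCODER> \
  --trigger_file ./trigger/trigger_pt_white_173_50_ap_replace.npz \
  --reference_file <ONE_FILE_FROM_./reference/CLIP/> \
  --lr <LR> --batch_size <BATCH_SIZE>   # hyperparameters as discussed in the paper
```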