as791 / ZOO_Attack_PyTorch

This repository contains a PyTorch implementation of the Zeroth Order Optimization (ZOO) based adversarial black-box attack (https://arxiv.org/abs/1708.03999).

Can you write a PyTorch version of ZOO for ImageNet datasets? #3

Closed Kitzzaaa closed 2 years ago

Kitzzaaa commented 2 years ago

Can you write a PyTorch version of ZOO for the ImageNet dataset? Thanks very much.

as791 commented 2 years ago

Hey @Kitzzaaa, can you let me know why this issue was closed? I have not yet coded that part. I will try my best to do it ASAP.

Kitzzaaa commented 2 years ago

Thank you! I am looking forward to seeing the code.

Kitzzaaa commented 2 years ago

Hi, have you finished it?

as791 commented 2 years ago

@Kitzzaaa Sorry, I am busy for a few weeks; I will try to do this as soon as possible.

Kitzzaaa commented 2 years ago

I directly changed the ZOO CIFAR-10 code to use the ImageNet dataset, but the results are not good. Do you have any suggestions? Also, I think the generate function in zoo_l2_attack_black should first check whether the original image is classified correctly.


as791 commented 2 years ago

Hey @Kitzzaaa, applying the ZOO implementation meant for smaller images to large images doesn't give the desired results; please read the paper for the exact details. It is well explained there.
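
(For intuition, the paper handles large ImageNet inputs with attack-space dimension reduction: the perturbation is optimized in a much smaller space and upsampled to the full image resolution before being added to the input. A minimal PyTorch sketch of that idea follows; the names upsample_perturbation, image, and delta are illustrative and not part of this repository.)

import torch
import torch.nn.functional as F

def upsample_perturbation(delta_small, image_size=299):
    # delta_small: (N, C, d, d) perturbation optimized in the reduced attack
    # space (e.g. d = 32). Bilinear interpolation scales it up to the full
    # input resolution, so the zeroth-order optimizer only has to estimate
    # C*d*d coordinates instead of C*image_size*image_size.
    return F.interpolate(delta_small, size=(image_size, image_size),
                         mode='bilinear', align_corners=False)

# Usage inside the attack loop (illustrative shapes for an ImageNet input):
image = torch.rand(1, 3, 299, 299)   # clean input in [0, 1]
delta = torch.zeros(1, 3, 32, 32)    # perturbation in the reduced space
adv_image = torch.clamp(image + upsample_perturbation(delta), 0.0, 1.0)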

Yes, I get your point; that is a more practical approach and you can do it as well. In the current testing implementation we take the labels from the test loader itself, so if an input was already misclassified by the classifier, the attack "breaks" the model indirectly at the very first iteration. You can see examples of this in the sample results: the CIFAR-10 image of the CAT class still goes to the CAT class after the attack, precisely because it was already misclassified.

That said, to obtain pure adversarial examples you should first restrict to the test samples that the model classifies correctly and then break the model on those; that is a more practical and better approach. Many recent implementations do this, and I agree with you completely.

Hope you got what you were looking for. I may get some time this weekend and will try to finish the ImageNet implementation if possible. I can't commit to it, but I will try my best.

as791 commented 2 years ago

You can use the function below to extract only the correctly classified samples; corr_pred is a boolean (True/False) array indicating whether the model classified each test sample correctly.

import random

import numpy as np

def generate_data(data, corr_pred, samples, targeted, start, vgg):
    # Collect `samples` correctly classified test inputs (and their attack
    # targets), starting at index `start`. corr_pred[k] is True if the model
    # classified test sample k correctly.
    inputs = []
    targets = []
    i = 0
    cnt = 0
    while cnt < samples:
        # Skip any sample the model already misclassifies.
        if corr_pred[start + i]:
            if targeted:
                if vgg:
                    # ImageNet (VGG): pick 10 random target classes out of 1000.
                    seq = random.sample(range(0, 1000), 10)
                else:
                    # Smaller datasets: attack every class.
                    seq = range(data.test_labels.shape[1])

                for j in seq:
                    # Do not target the true class (non-ImageNet case).
                    if (j == np.argmax(data.test_labels[start + i])) and (vgg == False):
                        continue
                    inputs.append(data.test_data[start + i])
                    # One-hot encode the target class.
                    targets.append(np.eye(data.test_labels.shape[1])[j])
            else:
                # Untargeted attack: keep the original (true) label.
                inputs.append(data.test_data[start + i])
                targets.append(data.test_labels[start + i])
            cnt += 1
        i += 1

    inputs = np.array(inputs)
    targets = np.array(targets)

    return inputs, targets
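
As for computing corr_pred itself, a minimal sketch is below, assuming a PyTorch model and a non-shuffled test loader; the names compute_corr_pred, model, test_loader, and device are illustrative and not part of this repository.

import numpy as np
import torch

@torch.no_grad()
def compute_corr_pred(model, test_loader, device='cpu'):
    # Returns a boolean NumPy array, in test-set order, that is True where
    # the clean prediction matches the ground-truth label. The loader must
    # not shuffle, so indices line up with data.test_data / data.test_labels.
    model.eval()
    correct = []
    for images, labels in test_loader:
        logits = model(images.to(device))
        preds = logits.argmax(dim=1).cpu()
        correct.append(preds.eq(labels).numpy())
    return np.concatenate(correct)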

@Kitzzaaa FYI: I have finished writing the code edits needed for ImageNet. I will test and update it in a few days.

Kitzzaaa commented 2 years ago

Thank you very much.