rkteddy / channel-Lipschitzness-based-pruning

Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness

Compatibility with BackdoorBench models? #3

Open CuriousCatCoder opened 1 year ago

CuriousCatCoder commented 1 year ago

Dear authors,

BackdoorBench (a comprehensive benchmark of backdoor attack and defense methods) has added your CLP defense method to its GitHub repository.

I trained several attacks (BadNet, WaNet) using BackdoorBench and then tried to defend those models with CLP. I noticed that BackdoorBench uses PreAct-ResNet18 rather than ResNet18, so I replaced the model in the original CLP code with the PreAct-ResNet18 version as suggested. However, the results are still not as good as I hoped. Can you provide some insights?

Commands to train the two attacks (note the second trains WaNet):

```
python ./attack/badnet_attack.py --yaml_path ../config/attack/badnet/cifar10.yaml --dataset cifar10 --dataset_path ../data --save_folder_name badnet_0_1
python ./attack/wanet_attack.py --yaml_path ../config/attack/wanet/cifar10.yaml --dataset cifar10 --dataset_path ../data --save_folder_name wanet_0_1
```

Then defense using CLP:

```
python ./defense/clp/clp.py --result_file badnet_0_1 --yaml_path ./config/defense/clp/cifar10.yaml --dataset cifar10
python ./defense/clp/clp.py --result_file wanet_0_1 --yaml_path ./config/defense/clp/cifar10.yaml --dataset cifar10
```

| BadNet | Original | u=4 | u=3 | u=2 | u=1 |
|---|---|---|---|---|---|
| ACC | 91.81 | 91.18 | 90.92 | 81.86 | 24.57 |
| ASR | 95.60 | 95.87 | 93.66 | 62.88 | 85.89 |

| WaNet | Original | u=4 | u=3 | u=2 | u=1 |
|---|---|---|---|---|---|
| ACC | 90.71 | 91.04 | 91.08 | 87.61 | 81.28 |
| ASR | 94.59 | 82.77 | 47.67 | 11.72 | 1.19 |

For your convenience, I have uploaded the two pre-trained models here: BadNet, WaNet. You just need to put them in the /record/ folder.

RJ-T commented 1 year ago

Thank you for your interest in our work. We found that this issue is closely related to the network architecture of PreAct-ResNet. In the earlier version, we ignored the connections between shortcuts and blocks when pruning, and thus the results did not meet performance expectations under the BadNet attack. We are working on it, and an updated version of CLP for PreAct-ResNet will be released after we finish testing.
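For context, here is a minimal sketch of a pre-activation ResNet basic block (class and parameter names are illustrative, not BackdoorBench's exact implementation) showing why the shortcut matters: the shortcut convolution consumes the same post-`bn1` activation as `conv1`, so pruning channels in `bn1` silently perturbs the shortcut path as well.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActBlock(nn.Module):
    """Pre-activation block: BN -> ReLU -> Conv, with the shortcut taken after bn1."""
    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, 1, 1, bias=False)
        self.shortcut = None
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Conv2d(in_planes, planes, 1, stride, bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(x))
        # The shortcut branches off *after* bn1, so bn1 feeds both conv1 and
        # the shortcut conv: pruning a bn1 channel also alters the shortcut
        # path, which pruning that assumes a plain Conv-BN chain misses.
        sc = self.shortcut(out) if self.shortcut is not None else x
        out = self.conv2(F.relu(self.bn2(self.conv1(out))))
        return out + sc
```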

rkteddy commented 1 year ago

Hi :) Sorry for the late reply. After considering the possibility that different architectures may be incompatible with our Conv-BN assumption, we have concluded that pruning the convolutional and batch normalization layers separately is universally applicable. We have updated our code accordingly, and it now looks like this:

import torch
import torch.nn as nn

def CLP(net, args):
    params = net.state_dict()
    for name, m in net.named_modules():
        if isinstance(m, nn.Conv2d):
            # Channel Lipschitz constant of each output channel:
            # the largest singular value of its flattened kernel.
            channel_lips = []
            for idx in range(m.weight.shape[0]):
                weight = m.weight[idx]
                weight = weight.reshape(weight.shape[0], -1).cpu()
                channel_lips.append(torch.svd(weight)[1].max())

            channel_lips = torch.Tensor(channel_lips)

            # Prune channels whose constant exceeds mean + u * std.
            index = torch.where(channel_lips > channel_lips.mean() + args.u * channel_lips.std())[0]

            params[name + '.weight'][index] = 0
            print(index)

        elif isinstance(m, nn.BatchNorm2d):
            # For BN, the per-channel Lipschitz constant is |gamma| / sigma.
            std = m.running_var.sqrt()
            weight = m.weight
            channel_lips = (weight / std).abs()

            index = torch.where(channel_lips > channel_lips.mean() + args.u * channel_lips.std())[0]

            # Zero both the scale and the shift of the flagged channels.
            params[name + '.weight'][index] = 0
            params[name + '.bias'][index] = 0
            print(index)

    net.load_state_dict(params)
    return net
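To illustrate the mean + u·std thresholding used on the convolutional branch, here is a small self-contained sketch (the 20x inflation of one channel is a contrived stand-in for a backdoor-sensitive channel, not real attack data):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 32, 3)
with torch.no_grad():
    conv.weight[5] *= 20.0  # artificially inflate one channel's kernel

# Channel Lipschitz constants: top singular value of each flattened kernel.
channel_lips = []
for idx in range(conv.weight.shape[0]):
    w = conv.weight[idx].reshape(conv.weight.shape[1], -1)
    channel_lips.append(torch.svd(w)[1].max())
channel_lips = torch.Tensor(channel_lips)

u = 3.0
index = torch.where(channel_lips > channel_lips.mean() + u * channel_lips.std())[0]
print(index.tolist())  # only the inflated channel is flagged
```

Smaller u lowers the threshold and prunes more channels, which matches the ACC/ASR trade-off in the tables above.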

The clean accuracy may be slightly reduced, because the new version prunes roughly twice as many channels as the previous one (the convolutional and BN layers are now pruned separately). Please feel free to try the new version, and do not hesitate to reach out to us if you require any further assistance.

Alpha-Luo commented 2 months ago

Does your method (i.e., CLP) work on other network architectures and backdoor attack types?