jpzhang1810 / TGR

Official PyTorch implementation for "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 2023).

Cannot extract the parameter #7

Open · Evalution13 opened this issue 6 months ago

Evalution13 commented 6 months ago

When I run `python attack.py --attack TGR --batch_size 1 --model_name vit_base_patch16_224`, it fails with:

```
C:\Users\xj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py:1359: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
  warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
Traceback (most recent call last):
  File "C:\Users\xj\Desktop\TGR-master\attack.py", line 48, in <module>
    adv_inps, loss_info = attack_method(batch_x, batch_y)
  File "C:\Users\xj\Desktop\TGR-master\methods.py", line 85, in __call__
    images = self.forward(*input, **kwargs)
  File "C:\Users\xj\Desktop\TGR-master\methods.py", line 304, in forward
    cost.backward()
  File "C:\Users\xj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "C:\Users\xj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "C:\Users\xj\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 69, in __call__
    return self.hook(module, *args, **kwargs)
  File "C:\Users\xj\Desktop\TGR-master\methods.py", line 202, in mlp_tgr
    c = grad_in[0].shape[2]
IndexError: tuple index out of range
```

The process exits with code 1. When I print `grad_in[0]`, it appears to have only two dimensions, so `shape[2]` does not exist. How should `shape[2]` be modified here?
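To illustrate what the `UserWarning` in the traceback is about, here is a minimal standalone sketch (not code from this repo): the deprecated `register_backward_hook` reports `grad_in` from the last internal autograd node, whose shapes vary across torch versions, while `register_full_backward_hook` reports the documented gradient with respect to the module's actual 3-D input.

```python
# Minimal sketch (not from this repo) contrasting the two hook APIs.
# With the deprecated register_backward_hook, grad_in reflects only the last
# autograd node inside the module, so its shapes depend on the torch version
# and need not match the module's 3-D input.
import copy

import torch
import torch.nn as nn

def shape_hook(tag):
    def hook(module, grad_in, grad_out):
        print(tag, [None if g is None else tuple(g.shape) for g in grad_in])
    return hook

block = nn.Sequential(nn.Linear(8, 16), nn.GELU(), nn.Linear(16, 8))
# The two hook kinds cannot coexist on one module, hence the deepcopy.
deprecated_block = block
full_block = copy.deepcopy(block)

deprecated_block.register_backward_hook(shape_hook("deprecated:"))
full_block.register_full_backward_hook(shape_hook("full:"))

x = torch.randn(1, 197, 8, requires_grad=True)  # (batch, tokens, channels)
deprecated_block(x).sum().backward()            # shapes vary by torch version
full_block(x).sum().backward()                  # prints [(1, 197, 8)]
```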

Evalution13 commented 6 months ago

```python
if self.model_name in ['vit_base_patch16_224', 'pit_b_224', 'cait_s24_224', 'resnetv2_101']:
    c = grad_in[0].shape[2]  # fails here: grad_in[0] is only 2-D
    out_grad_cpu = out_grad.data.clone().cpu().numpy()
    # Per channel, find the token positions with the extreme gradients
    # and zero them out.
    max_all = np.argmax(out_grad_cpu[0, :, :], axis=0)
    min_all = np.argmin(out_grad_cpu[0, :, :], axis=0)
    out_grad[:, max_all, range(c)] = 0.0
    out_grad[:, min_all, range(c)] = 0.0
for i in range(len(grad_in)):
    if i == 0:
        return_dics = (out_grad,)
    else:
        return_dics = return_dics + (grad_in[i],)
return return_dics
```

In fact, this problem occurs for all of 'vit_base_patch16_224', 'pit_b_224', 'cait_s24_224', and 'resnetv2_101'.
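The `UserWarning` in the traceback points at the root cause: on recent torch versions the deprecated `register_backward_hook` no longer hands the hook the gradient of the module's 3-D input, so `grad_in[0].shape[2]` is out of range. A hedged sketch of the alternative the warning itself suggests, assuming the hook is attached to each transformer block's MLP as in timm's ViT layout (the `model.blocks` / `.mlp` path and the `mlp_tgr_hook` name are assumptions, not copied from methods.py):

```python
# Hedged sketch, not the repo's code: register the TGR MLP hook with the
# full-hook API so grad_in[0] keeps the documented (batch, tokens, channels)
# shape. `model.blocks` / `.mlp` assume timm's vit_base_patch16_224 layout,
# and `mlp_tgr_hook` stands in for the bound hook from methods.py.
for block in model.blocks:
    block.mlp.register_full_backward_hook(mlp_tgr_hook)
```

Note that the full-hook API may pass a `grad_in` tuple of a different length than the deprecated one, so the `return_dics` construction above would also need to return exactly one gradient per actual module input.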
dazhonghua119 commented 6 months ago

I ran into the same problem. Downgrading torch to match the version used in the paper fixes it.
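If you take this route, a quick sanity check before running `attack.py` (the expected version below is a stand-in; use whatever this repo's requirements pin):

```python
# Hedged environment check: "1.7" is an assumed stand-in, not verified
# against this repo's requirements file.
import torch

expected = "1.7"  # replace with the repo's pinned torch version
assert torch.__version__.startswith(expected), (
    f"found torch {torch.__version__}; the TGR hooks expect torch {expected}.x"
)
```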