Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks].
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License

[BUG] JSMA massive gpu memory consumption #187

Open Dontoronto opened 2 weeks ago

Dontoronto commented 2 weeks ago

✨ Short description of the bug [tl;dr]

Today I tried to run JSMA on an ImageNet sample of shape (1, 3, 224, 224). The JSMA code got stuck for a while in the approximation step, and then an error message appeared saying JSMA needed to allocate 84.41 GiB of GPU memory, while my NVIDIA card only has 6 GB. Looking into the code I could see a lot of clones, inits, device transfers, etc., which cost a lot of memory and computation. I think someone smarter than me could optimize the code to work with lower memory consumption.

πŸ’¬ Detailed code and results

Traceback (most recent call last):
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torchattacks\attacks\jsma.py", line 116, in saliency_map
    alpha = target_tmp.view(-1, 1, nb_features) + target_tmp.view(
  File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torch\utils\_device.py", line 78, in __torch_function__
    return func(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 84.41 GiB. GPU 0 has a total capacity of 6.00 GiB of which 4.21 GiB is free. Of the allocated memory 704.63 MiB is allocated by PyTorch, and 29.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

rikonaka commented 2 weeks ago

Hi @Dontoronto, have you tested your NVIDIA device with other attacks such as PGD or CW? Since you're trying to attack ImageNet on a 6 GB device, I'm not sure whether your GPU is simply too small or whether it's a problem with the code.
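
For reference, a minimal sanity check along these lines might look like the sketch below. The model and inputs are placeholders (a torchvision ResNet-18 and a random tensor standing in for a real ImageNet sample), and the attack hyperparameters are only illustrative, so check them against the torchattacks docs for your installed version.

import torch
import torchvision
import torchattacks

# Hypothetical setup: any ImageNet classifier and a (1, 3, 224, 224) input batch.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval().cuda()
images = torch.rand(1, 3, 224, 224).cuda()   # stand-in for a real ImageNet sample
labels = torch.tensor([0]).cuda()

# Attacks that normally fit comfortably in a few GB of GPU memory.
for atk in (
    torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10),
    torchattacks.CW(model, c=1, kappa=0, steps=50, lr=0.01),
):
    adv = atk(images, labels)
    print(type(atk).__name__, adv.shape)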

Dontoronto commented 2 weeks ago

Yes, I tested it. I'm currently running DeepFool and PGD attacks without any problems. The problem occurs at this line:

File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torchattacks\attacks\jsma.py", line 116, in saliency_map
    alpha = target_tmp.view(-1, 1, nb_features) + target_tmp.view(

The variable nb_features is 150528 because of the flattened ImageNet sample. I don't know if this is really a bug or if my setup is just too weak.
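
As a quick sanity check on those numbers (plain arithmetic, not library code): the broadcasted view() addition at that line presumably materializes a (1, nb_features, nb_features) float32 tensor, and for nb_features = 150528 that works out to exactly the 84.41 GiB reported in the traceback.

nb_features = 3 * 224 * 224            # 150528 for a flattened (1, 3, 224, 224) sample
bytes_needed = nb_features ** 2 * 4    # float32 elements in the (1, 150528, 150528) pairwise tensor
print(bytes_needed / 2 ** 30)          # ~84.41 GiB, matching the CUDA OOM message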

rikonaka commented 2 weeks ago

> Yes, I tested it. I'm currently running DeepFool and PGD attacks without any problems. The problem occurs at this line: File "C:\Users\Domin\anaconda3\envs\NeuronalNetwork\lib\site-packages\torchattacks\attacks\jsma.py", line 116, in saliency_map alpha = target_tmp.view(-1, 1, nb_features) + target_tmp.view(
>
> The variable nb_features is 150528 because of the flattened ImageNet sample. I don't know if this is really a bug or if my setup is just too weak.

Roger that, I'm going to do some testing and debugging to try to find the problem and fix it! 😘

Dontoronto commented 2 weeks ago

I would like to give more information, but my computer is currently busy generating a DeepFool dataset. Thank you very much! 😃

rikonaka commented 2 weeks ago

> I would like to give more information, but my computer is currently busy generating a DeepFool dataset. Thank you very much! 😃

It seems I have found the cause of the problem: the input tensor passed to the Jacobian computation has an overly large dimension.

import torch

def compute_jacobian(model, x):
    # Wrapper so torch.autograd.functional.jacobian can differentiate the model output w.r.t. x.
    def model_forward(input):
        return model(input)
    # Materializes the full Jacobian of shape output.shape + x.shape in one shot.
    jacobian = torch.autograd.functional.jacobian(model_forward, x)
    return jacobian

In the above code, even if I input just 3 images (from ImageNet), GPU memory usage reaches 11 GB; with 5 images it reaches 16 GB, and with 6 images, 36 GB.

[screenshot: GPU memory usage when attacking 5 ImageNet images]

So even if batch_size is set to 10, it still requires close to 80 GB or more of GPU memory on the ImageNet dataset.

I'll try to improve the algorithm and make it work on ImageNet!
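
One generic way to cut the peak GPU memory of the Jacobian step (a sketch of the idea only, not the rewrite actually committed in the PR linked later in this thread; compute_jacobian_rowwise is a hypothetical helper name) is to build the Jacobian one output class at a time with ordinary backward passes, parking each slice in host memory as it is produced:

import torch

def compute_jacobian_rowwise(model, x, num_classes=1000):
    # x: (N, C, H, W). Returns a (num_classes, N, C, H, W) Jacobian built class by class,
    # so only one gradient tensor lives on the GPU at any moment.
    x = x.clone().detach().requires_grad_(True)
    output = model(x)                                   # (N, num_classes)
    rows = []
    for c in range(num_classes):
        grad = torch.autograd.grad(
            output[:, c].sum(), x, retain_graph=(c < num_classes - 1)
        )[0]
        rows.append(grad.detach().cpu())                # move each slice off the GPU immediately
    return torch.stack(rows)

This trades compute for memory (one backward pass per class) and, of course, does nothing about the O(n^2) saliency-map problem described below.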

Dontoronto commented 1 week ago

You are awesome! I really appreciate your effort :)

rikonaka commented 5 days ago

Hi @Dontoronto, bad news: I've been trying to reduce the memory consumption on ImageNet for a while now and have rewritten the whole JSMA attack (https://github.com/Harry24k/adversarial-attacks-pytorch/pull/168/commits/8c065ecf998226429b53aed27bc8f6591ea287d7), but I've found that this seems to be an unattainable goal.

Here are my reasons why.

First, consider Algorithm 2 and Algorithm 3 from the original JSMA attack paper:

[Algorithm 2 from the JSMA paper]

The JSMA attack traverses all (p1, p2) pairs drawn from the search domain; for ImageNet that domain has 3 * 224 * 224 = 150528 features, so there are (3 * 224 * 224)^2 combinations to look up. On a very small dataset this lookup is feasible, but on ImageNet the lookup matrix becomes unbelievably huge, which leads to enormous GPU memory consumption.

[Algorithm 3 from the JSMA paper]

Second, when computing the SM (saliency map), we need to run the pairwise addition once for every pair of elements, an operation with O(n^2) memory consumption. You read that right, it really is O(n^2); I think that's an inherent disadvantage of JSMA. Ten ImageNet images go from (10, 150528) to (10, 150528, 150528), and backward propagation through such a large matrix is extremely memory intensive.

[Equation 9 from the JSMA paper]

[Equation 10 from the JSMA paper]
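
To put the O(n^2) argument in concrete numbers, here is a rough back-of-the-envelope check, assuming the full pairwise matrix is materialized in float32:

images = 10
nb_features = 3 * 224 * 224                       # 150528 features per flattened ImageNet image
pairwise_bytes = images * nb_features ** 2 * 4    # (10, 150528, 150528) float32 saliency tensor
print(pairwise_bytes / 2 ** 30)                   # roughly 844 GiB, far beyond a 150 GB server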

In the end, this is actually not a bug. If you are planning to run JSMA attacks on ImageNet: my experimental equipment is limited, and even on a server with 150 GB of RAM I couldn't get these attacks to finish, so you could try a server with more than 200 GB of RAM 😂. If you're successful, remember to get back to me on how much RAM you ended up using!

Dontoronto commented 4 days ago

@rikonaka Sorry for causing you so much work. Everything you mentioned sounds plausible. I just stumbled over this while generating samples for my thesis. Unfortunately I only have a 6 GB GPU and can't use JSMA for the ImageNet case, so I'll try to use OnePixel to get L0 attacks. Thank you very much! Do I need to close this issue or will you close it? I don't know if you still have something in mind regarding this :)
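
For the OnePixel route, a minimal sketch is shown below. The model and inputs are placeholders, and the constructor arguments (pixels, steps, popsize) are assumptions to be checked against the torchattacks docs for the installed version.

import torch
import torchvision
import torchattacks

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval().cuda()
images = torch.rand(1, 3, 224, 224).cuda()   # stand-in for a real ImageNet sample
labels = torch.tensor([0]).cuda()

# OnePixel perturbs only a handful of pixels (a small L0 budget) and does not
# build the huge pairwise saliency matrix that JSMA needs.
atk = torchattacks.OnePixel(model, pixels=1, steps=10, popsize=10)
adv = atk(images, labels)
print(adv.shape)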

rikonaka commented 4 days ago

You can close this issue; if I have an update, I'll comment below! 👍