thu-ml / Attack-Bard


How to run on multiple GPUs? #1

Closed ZiruiSongBest closed 1 year ago

ZiruiSongBest commented 1 year ago

Hi! Thanks for your great work. I'm trying to utilize your project and was wondering how to run it in a multi-GPU environment. Additionally, what are the GPU memory requirements? Could you provide any recommended methods or steps to achieve this? Are there any specific documents or examples I can refer to?

Thank you so much for your assistance!

huanranchen commented 1 year ago

Hi! Thank you for your interest. However, our code does not support DDP; we only support a "pipeline parallel" scheme that we wrote ourselves.

GPU memory requirements:

- Image embedding attack: about 24 GB
- VLM attack (depends on the surrogate models): MiniGPT-4: about 24 GB; InstructBLIP: about 32 GB; BLIP-2: about 24 GB

If you want to modify the "pipeline parallel" logic, please refer to `./attacks/AdversarialInput/AdversarialInputBase.py`. Using DDP is not recommended because it would require modifying a lot of code.
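The "pipeline parallel" idea mentioned above can be sketched roughly as follows: each surrogate model is placed on its own GPU, and the input is moved to each model's device before its forward pass. This is a hypothetical illustration (the class name `PipelineParallelEnsemble` and its structure are my own), not the repository's actual code, which lives in `./attacks/AdversarialInput/AdversarialInputBase.py`:

```python
import torch
import torch.nn as nn


class PipelineParallelEnsemble(nn.Module):
    """Toy 'pipeline parallel' ensemble: one surrogate model per GPU.

    Hypothetical sketch of the idea only; falls back to CPU when no GPU
    is available.
    """

    def __init__(self, models):
        super().__init__()
        n_gpus = max(torch.cuda.device_count(), 1)
        self.models = nn.ModuleList(models)
        self.devices = []
        for i, m in enumerate(self.models):
            # Round-robin the surrogate models across the visible GPUs.
            device = torch.device(
                f"cuda:{i % n_gpus}" if torch.cuda.is_available() else "cpu"
            )
            m.to(device)
            self.devices.append(device)

    def forward(self, x):
        # Move the input to each model's device, run the forward pass there,
        # then bring every output back to the input's device and average.
        outputs = [
            m(x.to(dev)).to(x.device)
            for m, dev in zip(self.models, self.devices)
        ]
        return torch.stack(outputs).mean(dim=0)
```

With this layout only the input tensor (not the models) crosses devices, which is why the memory requirement splits across GPUs instead of accumulating on one.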

Yuancheng-Xu commented 12 months ago

Hi! I wonder what are the GPU memory requirements to run the ensemble attack? Is it correct that if MiniGPT-4 + InstructBLIP + BLIP-2 are used, then the GPU requirement will be 24 + 32 + 24 = 80 GB?

Thank you!

huanranchen commented 12 months ago

> Hi! I wonder what are the GPU memory requirements to run the ensemble attack? Is it correct that if MiniGPT-4 + InstructBLIP + BLIP-2 are used, then the GPU requirement will be 24 + 32 + 24 = 80 GB?

Hi! You are correct~ It requires about 80 GB to run the Ensemble Text Description Attack. You can either run it on a single GPU (requires one 80 GB GPU) or run it distributed across 3 different GPUs (requires 3x40 GB GPUs).

The Image Embedding Attack is more effective than the Text Description Attack while requiring less GPU memory, so I strongly recommend using the Image Embedding Attack.

We also provide some pre-crafted adversarial examples here: https://github.com/thu-ml/Attack-Bard/tree/main/dataset/ssa-cwa-200

Yuancheng-Xu commented 12 months ago

Thanks for the reply, Huanran! Another question: I wonder if you have had any success transferring Text Description Attack images to GPT-4V. In the paper, only the success rate of the image embedding attack is reported.

huanranchen commented 12 months ago

> Thanks for the reply, Huanran! Another question: I wonder if you have had any success transferring Text Description Attack images to GPT-4V. In the paper, only the success rate of the image embedding attack is reported.

We haven't tested the Text Description Attack's success rate against GPT-4V, because our paper was made public before GPT-4V was released... We will test it and revise our paper soon~

Yuancheng-Xu commented 11 months ago

Hi!

I am running the image embedding attack via `CUDA_VISIBLE_DEVICES=0,1,2 python attack_img_encoder_misdescription.py` (as in the README) and encounter the following error. I wonder what the issue is. Does the code work with multiple GPUs?

```
Traceback (most recent call last):
  File "/fs/nexus-projects/HuangWM/yc/Attack-Bard/attack_img_encoder_misdescription.py", line 48, in <module>
    ssa_cw_loss.set_ground_truth(x)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/fs/nexus-projects/HuangWM/yc/Attack-Bard/surrogates/FeatureExtractors/Base.py", line 46, in set_ground_truth
    self.ground_truth.append(model(x))
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/fs/nexus-projects/HuangWM/yc/Attack-Bard/surrogates/FeatureExtractors/ViT.py", line 33, in forward
    outputs = self.model(inputs)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/models/vit/modeling_vit.py", line 573, in forward
    embedding_output = self.embeddings(
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/models/vit/modeling_vit.py", line 122, in forward
    embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/models/vit/modeling_vit.py", line 181, in forward
    embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/cmlscratch/xic/anaconda3/envs/minigptv/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:2! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)
```

huanranchen commented 11 months ago

> I am running the image embedding attack via `CUDA_VISIBLE_DEVICES=0,1,2 python attack_img_encoder_misdescription.py` (as in the README) and encounter the following error. I wonder what the issue is. Does the code work with multiple GPUs?

Thank you for pointing this out! We have already fixed this problem. Thank you~
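For anyone hitting this error on an older checkout: the `RuntimeError` above means an input tensor on `cuda:0` was fed to a convolution whose weights live on `cuda:2`. The usual pattern for the fix is to move the input onto the model's own device before each forward pass. A minimal sketch of that pattern (the helper name `forward_on_model_device` is my own; the repository's actual fix may differ):

```python
import torch
import torch.nn as nn


def forward_on_model_device(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Run model(x) with x moved to the device the model's weights live on,
    then bring the result back to x's original device."""
    model_device = next(model.parameters()).device
    return model(x.to(model_device)).to(x.device)
```

Applied inside each surrogate's feature-extraction step, this keeps the weight and input tensors on the same device regardless of which GPU a given surrogate was assigned to.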