
# AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting

If you like our project, please give us a star ⭐ on GitHub for the latest updates.
[![arxiv](https://img.shields.io/badge/Arxiv-2403.09513-red)](https://arxiv.org/pdf/2403.09513.pdf) [![project](https://img.shields.io/badge/Project-Page-blue)](https://rain305f.github.io/AdaShield-Project/) ![Jailbreak Attacks](https://img.shields.io/badge/Jailbreak-Attacks-yellow.svg?style=plastic) ![Defenses](https://img.shields.io/badge/MultimodalLargeLanguage-Defenses-orange.svg?style=plastic) ![Multimodal Large Language Models](https://img.shields.io/badge/MultimodalLargeLanguage-Models-green.svg?style=plastic)

The official implementation of our paper "[AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting](https://arxiv.org/abs/2403.09513)", by [Yu Wang](https://rain305f.github.io/)\*, [Xiaogeng Liu](https://sheltonliu-n.github.io)\*, [Yu Li](https://yu-li.github.io/), [Muhao Chen](https://muhaochen.github.io), and [Chaowei Xiao](https://xiaocw11.github.io) (\* denotes equal contribution).

## 📰 News

| Date | Event |
|------------|----------|
| **2024/07/01** | 🔥 Our paper is accepted by ECCV 2024. |
| **2024/03/30** | We have released our full training and inference code. |
| **2024/03/15** | We have released our paper. Thanks to all collaborators. |

## 💡 Abstract

With the advent and widespread deployment of Multimodal Large Language Models (MLLMs), the imperative to ensure their safety has become increasingly pronounced. However, with the integration of additional modalities, MLLMs are exposed to new vulnerabilities, rendering them prone to structure-based jailbreak attacks, in which semantic content (e.g., "harmful text") is injected into images to mislead MLLMs. In this work, we aim to defend against such threats. Specifically, we propose **Ada**ptive **Shield** Prompting (**AdaShield**), which prepends inputs with defense prompts to protect MLLMs against structure-based jailbreak attacks without fine-tuning the MLLM or training additional modules (e.g., a post-stage content detector). Initially, we present a manually designed static defense prompt, which thoroughly examines the image and instruction content step by step and specifies how to respond to malicious queries. Furthermore, we introduce an adaptive auto-refinement framework, consisting of a target MLLM and an LLM-based defense prompt generator (Defender). These components collaboratively and iteratively communicate to generate a defense prompt. Extensive experiments on popular structure-based jailbreak attacks and benign datasets show that our method consistently improves MLLMs' robustness against structure-based jailbreak attacks without compromising their general capabilities on standard benign tasks.

## ⚡ Framework
AdaShield consists of a defender $D$ and a target MLLM $M$, where $D$ aims to generate a defense prompt $P$ that safeguards $M$ from malicious queries. $P$ is then prepended to the current malicious query and fed to $M$ to generate a response $R$. Using the previously failed defense prompts and the jailbreak responses from $M$ as feedback, $D$ iteratively refines the defense prompt in a chat format.
A conversation example from AdaShield between the target MLLM $M$ and the defender $D$. The objective of $D$ is to safeguard $M$ from harmful queries in the Sex scenario. On the first attempt, $D$ generates a defense prompt that fails to block the malicious query. Then, using the jailbreak response from $M$ and the previous defense prompt as feedback, $D$ successfully optimizes the defense prompt by injecting safety rules about the sex scenario, and outputs a reason for the refinement to aid interpretability.
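
The auto-refinement loop can be summarized as below. This is a minimal illustrative sketch, not the repository's actual pipeline: `query_target_mllm`, `query_defender`, and `is_jailbroken` are hypothetical placeholders for the target MLLM $M$, the defender $D$, and the jailbreak judge, and `max_iters` is an assumed refinement budget. See [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md) for the real training and inference scripts.

```python
# Illustrative sketch of AdaShield's adaptive auto-refinement loop.
# The three helpers below are placeholders and must be wired to real models.

def query_target_mllm(image_path: str, prompt: str) -> str:
    """Placeholder for the target MLLM M (e.g., a LLaVA inference call)."""
    raise NotImplementedError

def query_defender(history: list[tuple[str, str]]) -> str:
    """Placeholder for the LLM-based defender D that rewrites the defense prompt."""
    raise NotImplementedError

def is_jailbroken(response: str) -> bool:
    """Placeholder judge deciding whether M's response complied with the attack."""
    raise NotImplementedError

def adashield_refine(image_path: str, malicious_query: str,
                     static_prompt: str, max_iters: int = 5) -> str:
    """Iteratively refine a defense prompt P until M refuses the malicious query."""
    defense_prompt = static_prompt             # start from the manual static prompt
    history: list[tuple[str, str]] = []        # failed prompts + jailbreak responses
    for _ in range(max_iters):
        # Prepend the current defense prompt P to the query and feed it to M.
        response = query_target_mllm(image_path,
                                     defense_prompt + "\n" + malicious_query)
        if not is_jailbroken(response):
            return defense_prompt              # M refused the query: keep this prompt
        # Give D the failed prompt and the jailbreak response as feedback,
        # so it can inject scenario-specific safety rules into a new prompt.
        history.append((defense_prompt, response))
        defense_prompt = query_defender(history)
    return defense_prompt                      # best effort after max_iters
```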
## 🚀 Main Results

### Defense Effectiveness and Benign Evaluation

## πŸ› οΈ Requirements and Installation * Python >= 3.10 * Pytorch == 2.1.2 * CUDA Version >= 12.1 * Install required packages: ```bash git clone https://github.com/rain305f/AdaShield cd AdaShield conda create -n adashield python=3.10 -y conda activate adashield pip install -r requirement.txt # install environments about llava git clone https://github.com/haotian-liu/LLaVA.git cd LLaVA pip install --upgrade pip pip install -e . cd .. rm -r LLaVA ``` ## πŸ—οΈ Training & Validating The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md). ## πŸ‘ Acknowledgement * [FigStep Attack](https://github.com/ThuCCSLab/FigStep) Effective jailbreak attack for multi-modal large language models, and great job contributing the evaluation code and dataset. * [MM-SafetyBench(QR Attack)](https://github.com/isXinLiu/MM-SafetyBench) Effective jailbreak attack for multi-modal large language models, and great job contributing the evaluation code and dataset. ## πŸ™Œ Related Projects * [AutoDAN](https://github.com/SheltonLiu-N/AutoDAN) Powerful jailbreak attack strategy on Aligned Large Language Models. * [BackdoorAlign](https://jayfeather1024.github.io/Finetuning-Jailbreak-Defense/) Effective backdoor defense method against Fine-tuning Jailbreak Attack. ## πŸ“‘ Citation Please consider citing πŸ“‘ our papers if our repository is helpful to your work, thanks sincerely! ```bibtex @article{wang2024adashield, title={Adashield: Safeguarding multimodal large language models from structure-based attack via adaptive shield prompting}, author={Wang, Yu and Liu, Xiaogeng and Li, Yu and Chen, Muhao and Xiao, Chaowei}, journal={arXiv preprint arXiv:2403.09513}, year={2024} } ```