abc03570128 / Jailbreaking-Attack-against-Multimodal-Large-Language-Model

36 stars 6 forks source link

Questions about the results of the code run #3

Open guokun111 opened 2 months ago

guokun111 commented 2 months ago

Hi, I run the imgJP-based Jailbreak(Multiple Harmful Behaviors) method for MiniGPT-4(LLaMA2) attacks, Run the provided code python v1_mprompt.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0, However, the ASR results shown by the code in training and testing are only 0.44(11/25) and 0.37(37/100) respectively, but 0.88 and 0.92 respectively in the paper. Why is that image

WaterDropjack commented 1 week ago

Have you solved the problem?