Hi, I ran the imgJP-based jailbreak (Multiple Harmful Behaviors) method for the MiniGPT-4 (LLaMA2) attack with the provided command: python v1_mprompt.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0. However, the ASR values the code reports for training and testing are only 0.44 (11/25) and 0.37 (37/100) respectively, versus 0.88 and 0.92 in the paper. Why is that?
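For clarity, here is how I'm reading the ASR numbers above: just successful behaviors over total behaviors evaluated. This is only my arithmetic on the counts printed by v1_mprompt.py; the actual judging of whether a response counts as jailbroken happens inside the repo's evaluation code, which I haven't modified.

```python
# Minimal sketch of the ASR arithmetic I'm reporting above.
# The counts (11/25 train, 37/100 test) are what my run printed;
# the jailbreak-success judgment itself comes from the repo's code.

def asr(successes: int, total: int) -> float:
    """Attack success rate as a fraction of behaviors evaluated."""
    return successes / total

print(f"train ASR: {asr(11, 25):.2f}")   # 0.44 in my run vs. 0.88 in the paper
print(f"test  ASR: {asr(37, 100):.2f}")  # 0.37 in my run vs. 0.92 in the paper
```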