Unispac / Visual-Adversarial-Examples-Jailbreak-Large-Language-Models

Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models

Question for evaluation #7

Closed: ifshine closed this issue 11 months ago

ifshine commented 11 months ago

Thanks for your excellent work! In Section 4.3 of your paper, you write that "we use the challenging subset of RealToxicityPrompts, which contains 1225 text prompts." I'm not sure how many text prompts the subset itself contains: does 1225 refer to the subset or to the full dataset?

Unispac commented 11 months ago

Hi,

Thanks for your interest in our work :) In the sentence you quoted, we mean that the challenging subset itself contains 1225 text prompts. Sorry for the confusion.
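
For anyone looking to reproduce the subset, here is a minimal sketch, assuming the HuggingFace release of the dataset under the id `allenai/real-toxicity-prompts` with a boolean `challenging` field (neither is stated in this thread, and this is not necessarily the paper's exact pipeline):

```python
# Minimal sketch: extract the "challenging" subset of RealToxicityPrompts.
# Assumes the HuggingFace dataset id and field names below; not necessarily
# the exact pipeline used in the paper.
from datasets import load_dataset

ds = load_dataset("allenai/real-toxicity-prompts", split="train")

# Keep only the rows flagged as challenging.
challenging = ds.filter(lambda ex: ex["challenging"])

print(len(challenging))  # Section 4.3 of the paper reports 1225 prompts

# Each row's prompt text lives under prompt["text"].
prompts = [ex["prompt"]["text"] for ex in challenging]
```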

ifshine commented 11 months ago

Thanks for your explanation!