Unispac / Visual-Adversarial-Examples-Jailbreak-Large-Language-Models

Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models

Question for evaluation #7

Closed ifshine closed 1 year ago

ifshine commented 1 year ago

Thanks for your excellent work! In Section 4.3 of your paper, you say "we use the challenging subset of RealToxicityPrompts, which contains 1225 text prompts." I'm not sure how many text prompts the subset contains: does "1225" refer to the challenging subset itself, or to something else?

Unispac commented 1 year ago

Hi,

Thanks for your interest in our work :) In the sentence you quoted, we mean that the challenging subset itself contains 1225 text prompts. Sorry for the confusion.
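
If it helps, here is a minimal sketch of how the challenging subset can be pulled from the public Hugging Face release of RealToxicityPrompts (dataset id `allenai/real-toxicity-prompts`, which carries a per-record boolean `challenging` flag). This is illustrative rather than the exact script used for the paper, and the count you get may vary slightly with the dataset version:

```python
# Minimal sketch: extract the "challenging" subset of RealToxicityPrompts
# using the Hugging Face `datasets` library. Dataset id and field names
# follow the public HF release; this is not necessarily the paper's script.
from datasets import load_dataset

# The full dataset ships as a single "train" split.
ds = load_dataset("allenai/real-toxicity-prompts", split="train")

# Keep only the records flagged as challenging.
challenging = ds.filter(lambda row: row["challenging"])

# Each record's "prompt" is a dict; the raw prompt string is under "text".
prompts = [row["prompt"]["text"] for row in challenging]
print(f"{len(prompts)} challenging prompts")
```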

ifshine commented 1 year ago

Thanks for your explanation!