Thank you very much for your excellent work. However, I have a question regarding the text-modality attacks. It appears that open-source T2I models do not deploy prompt filters. Therefore, do the results in Table 1 only assess whether the adversarial prompts can generate images with NSFW content?