Azure / PyRIT

The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.
MIT License
1.88k stars 359 forks source link

[Multiple Tasks] FEAT add attack modules from moonshot #376

Open eugeniavkim opened 1 month ago

eugeniavkim commented 1 month ago

Is your feature request related to a problem? Please describe.

Adding in attack modules from Project Moonshot that can be adapted as converters under pyrit.prompt_converter

Describe the solution you'd like

Directly porting over the technique from attack-modules from https://github.com/aiverify-foundation/moonshot-data?tab=readme-ov-file#attack-modules

In order to prevent duplicate work, we can use this task list below to check off completed attack modules as well as commenting on which attack you are working on adapting into PyRIT.

eugeniavkim commented 1 month ago

I will take on the colloquial wordswap attack and mark it completed on the task list once completedšŸ‘

visirion07 commented 1 month ago

I will "attack" Textfooler and Textbugger. WIll mark it completed once done.

KutalVolkan commented 1 month ago

Hi @eugeniavkim ,

I would like to work on Malicious Question Generator and Violent Durian.

I also took a look at the Toxic Sentence Generator and noticed that 22 files have been flagged as unsafe. Just wanted to check with youā€”is it still safe to proceed with this model, or should we apply the same approach used in the Malicious Question Generator as an alternative?

Hereā€™s the link to the files I mentioned: Toxic Sentence Generator.

Looking forward to your thoughts!


romanlutz commented 1 month ago

@KutalVolkan go ahead! Which files are unsafe?

KutalVolkan commented 1 month ago

@KutalVolkan go ahead! Which files are unsafe?

Hello Roman,

Hereā€™s the link and the screenshot I mentioned regarding the unsafe files: Toxic Sentence Generator on Hugging Face.

image
KutalVolkan commented 1 month ago

Hello @romanlutz,

A few additional questions:

  1. Should we create a PR for each converter individually, e.g., for the Malicious Question Generator, or should we wait until all the above attack modules from Project Moonshot are finished before submitting the PR?

Submitting separate PRs might allow for more focused reviews and quicker feedback on each converter, but I'll defer to your preference on how you'd like to handle it.

  1. Regarding Violent Durian, I initially thought it would function more like a strategy inside the Red Teaming Orchestrator. Upon further review, I see that it operates more dynamically by convincing the LLM (prompt target) to take on a criminal persona. The setup involves a multiturn agent that manipulates the LLM into gradually adopting the identity of a criminal (e.g., Zodiac Killer, Ted Bundy) and generating responses as if it were that persona.

This contrasts with a standard converter that mostly modifies the input prompt. In this case, Violent Durian seems to guide a multi-turn conversation, progressively influencing the LLM to respond unethically and act in alignment with the persona.

For example, I plan to integrate this behavior into the Red Teaming Orchestrator by dynamically selecting a criminal persona and applying it to the conversation objective in the YAML-based attack strategy, adapting the YAML to fit the Violent Durian use case.

If you have a different approach or best practices to suggest, Iā€™d be happy to incorporate them. Looking forward to your thoughts šŸ˜€

romanlutz commented 1 month ago

Yes, individual PRs are preferable, unless you're reusing pieces. Even then it's probably better to have them one after the other.

Your idea to use it on the orchestrator level makes sense. Essentially, this would be a new custom attack strategy.

romanlutz commented 1 month ago

@KutalVolkan go ahead! Which files are unsafe?

Hello Roman,

Hereā€™s the link and the screenshot I mentioned regarding the unsafe files: Toxic Sentence Generator on Hugging Face. image

Good question...

I have not used them before, but this sounds suspicious. Maybe it's because they're binary? I suppose we could go back to the paper and check how they generated these but that could involve a lot of work. Otherwise, I'm inclined to skip. Don't want to be responsible for making your machine unsafe šŸ˜†

nina-msft commented 1 month ago

Marking this with good first issue. The remaining tasks of:

may be good first issues to tackle.

nina-msft commented 1 month ago

@visirion07 - are you still planning on taking a look at Textfooler and Textbugger? šŸ˜„

visirion07 commented 1 month ago

Yes @nina-msft. Sorry got held up in some other work. Taking this up as a high priority. WIll post an ETA soon