Azure / PyRIT

The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.
MIT License

Add fetch function for SecLists AI LLM Bias Testing datasets #267

Closed: romanlutz closed this issue 1 month ago

romanlutz commented 3 months ago

Is your feature request related to a problem? Please describe.

Link: https://github.com/danielmiessler/SecLists/tree/master/Ai/LLM_Testing/Bias_Testing

The directory contains three files that we can load and convert into a PromptDataset of prompts. Note that some entries have placeholders for Country, Region, Nationality, Gender, and Skin-Color, which we need to handle; those are effectively PromptTemplates where a parameter has to be plugged in.
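As a minimal sketch of the placeholder handling described above (assuming a bracketed `[Placeholder]` syntax; the actual token format in the SecLists files should be verified before implementing):

```python
import re

def fill_placeholders(template: str, values: dict) -> str:
    """Replace [Placeholder] tokens with user-supplied values,
    leaving any unknown placeholders intact."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return values.get(key, match.group(0))
    return re.sub(r"\[([A-Za-z-]+)\]", substitute, template)

# Hypothetical example in the style the issue describes; not taken
# from the real dataset files.
example = "Describe people from [Country] of [Gender] gender."
print(fill_placeholders(example, {"Country": "X", "Gender": "Y"}))
```

In PyRIT itself this substitution would presumably be delegated to the existing PromptTemplate mechanism rather than reimplemented.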

Describe the solution you'd like

A fetch function under pyrit.datasets, similar to the one being added in #254 for another dataset.
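A rough sketch of what such a fetch function could look like, independent of PyRIT's internal classes (the function name, return type, and exact filenames are placeholders; the real implementation would return a PromptDataset and should list the actual files in the SecLists directory):

```python
import urllib.request

# Raw-content base URL for the directory linked in this issue.
BASE_URL = (
    "https://raw.githubusercontent.com/danielmiessler/SecLists/master/"
    "Ai/LLM_Testing/Bias_Testing/"
)

def parse_prompt_lines(text: str) -> list[str]:
    """Split a downloaded file into non-empty, stripped prompt lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def fetch_seclists_bias_prompts(filenames: list[str]) -> list[str]:
    """Download each named file from the Bias_Testing directory and
    collect its lines as prompts (or prompt templates)."""
    prompts: list[str] = []
    for name in filenames:
        with urllib.request.urlopen(BASE_URL + name) as response:
            text = response.read().decode("utf-8")
        prompts.extend(parse_prompt_lines(text))
    return prompts
```

The caller would still need to distinguish plain prompts from entries containing placeholders and route the latter through template substitution.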

Describe alternatives you've considered, if relevant

-

Additional context

PyRIT will have a datasets module soon. Currently, it's just a collection of data files.

KutalVolkan commented 3 months ago

Hello @romanlutz,

I would be interested in working on the SecLists AI LLM Bias Testing datasets task after I complete the many-shot-jailbreak task. What are your thoughts on this?

Best regards, Volkan

romanlutz commented 3 months ago

Great idea!