OPTML-Group / Diffusion-MU-Attack

The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method for evaluating whether safety-driven unlearned diffusion models can still be made to generate harmful content.

nudity evaluation #12


LucidStephen commented 1 month ago

Thank you for open-sourcing such meaningful work. I've noticed that UnlearnDiffAtk is increasingly being used as an evaluation tool in the concept erasure field. Would the authors consider providing official shell scripts for the nudity evaluation and the other experiments, so that the benchmarks can be reproduced with one click?
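
For example, a wrapper along these lines is what I have in mind (just a sketch of the idea; the entry point `src/execute/attack.py`, the config path, the flag names, and the prompt count are my guesses and may not match the repo's actual interface):

```bash
#!/usr/bin/env bash
# run_nudity_eval.sh -- hypothetical one-click wrapper for the nudity evaluation.
# NOTE: entry point, config path, flag names, and NUM_PROMPTS below are assumptions,
# not the repo's confirmed CLI; adjust them to the actual README commands.
set -euo pipefail

CONFIG="configs/nudity/attack_esd_nudity.json"   # assumed config location
NUM_PROMPTS=142                                  # assumed size of the nudity prompt set

# Run the attack once per prompt index and log each run under its own name.
for idx in $(seq 0 $((NUM_PROMPTS - 1))); do
  python src/execute/attack.py \
    --config-file "$CONFIG" \
    --attacker.attack_idx "$idx" \
    --logger.name "nudity_idx_${idx}"
done
```

Something like this would let new users reproduce the nudity results with a single `bash run_nudity_eval.sh` instead of assembling the per-prompt commands by hand.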

damon-demon commented 1 month ago

Thank you for your kind words and support! We're glad to hear that UnlearnDiffAtk is proving useful in the concept erasure field.

We do indeed plan to release our method as a Python package to make it more user-friendly and accessible. Once we roll out the updates, I'd be happy to keep you posted and share the latest developments with you. Stay tuned, and feel free to reach out with any further feedback or suggestions!