axodox / unpaint

A simple Windows / Xbox app for generating AI images with Stable Diffusion.

[Question] Safety #25

Closed JohhoA closed 1 year ago

JohhoA commented 1 year ago

Any way to disable the safety check? Can't seem to find how. (Yes, I'm shameless.)

axodox commented 1 year ago

I knew this question would come up soon. This is by design: the toggles are only there in debug builds, to test the safety mechanisms themselves.

To explain why:

Not so long ago, when I worked on a bug with prompt parsing, I tested the app by typing random letters with some spaces, getting some nondescript output, like landscapes, buildings, people walking on streets, abstract images and such. I was following Microsoft's very primitive Stable Diffusion with C# example, which had no safety checker at the time, and neither did my app.

It looked like I fixed the bug, so I typed in some more random prompts, but then I managed to type in a string of around 15 random letters. It did not look like anything meaningful, but it triggered damning results. I will not describe the content here, but I will add that it was not just an unconditional output; it was very specific. I tested this cursed string with five networks, four community-based ones and the original SD 1.5 from Stability AI. Except for the original, all the other, otherwise wholesome-looking, community models would generate pretty much the same results.

Long story short, I was glad I found it out this way and got a free and strong lesson about AI safety. I immediately put my ControlNet plans on hold and started adding safety features. Once I implemented them and verified that they worked, I kept them on all the time.

And another take: imagine you made the app, and you read in the news that a high schooler committed suicide due to explicit fake images shared by her classmates, and the article mentioned that it was your app that enabled it. I just don't wish to be in that situation.

So that is why the safety feature will remain ON in this repo, at least until I find a better solution to the above problems. I know some will not like this response, but being the author of an app is quite different from being one of its users.

Aptronymist commented 1 year ago

I can appreciate your concern and your perspective; it's just that, at least to me (and no offense intended at all by this), it doesn't seem entirely rational considering the current landscape of AI-generated art tools. It's akin to a blacksmith refusing to make a knife because it might be used to kill a human rather than as a tool.

There are plenty of guides on how to make your own deepfake images of people in that manner, and a plethora of tools that would do so far more easily than yours, since you don't have inpainting or ControlNet.

There is functioning WebGPU-based in-browser image generation, and plenty of much more full-featured diffusion software packages that run on any GPU or CPU, and any high schooler with $10 a month, or a Google account in the case of Colab use, is free to generate anything they want with a few clicks. This becomes easier every day.

If they want to do it, they'll do it.

For you to take responsibility upon yourself for what other people can and will do (and if they don't do it with your tool, it will be with someone else's) seems like a recipe for just giving up entirely.

Personally, I don't need this app; I can run any number of sdwebui forks or numerous others on AMD and NVIDIA, and a decent CPU if need be. But I was excited to see diffusion finally come to Windows natively, and in C++ to boot, something that I could recommend to friends or family who might be curious to play with it, so I want you to succeed. I want everyone to have AI art at their fingertips. Every child with a tablet or phone, every adult bored on the toilet.

As I understand it, it's not exactly hard to just bypass your safety code entirely and build the project without it. I don't think a computer-literate kid with ChatGPT access and the ability to download Visual Studio will struggle long. Nor will it be long before a non-protected fork pops up; this is GitHub, after all.

logikstate commented 1 year ago

This repo has an NSFW filter in it: https://github.com/FaceONNX/NsfwONNX
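For context on how such a filter typically hooks in: the classifier runs on every generated image before it is shown, and flagged images are suppressed. Here is a minimal Python sketch using onnxruntime; the model filename is hypothetical, and the 224x224 RGB input and binary [safe, unsafe] output are assumptions, since the real input size, tensor names, and class layout depend on the model shipped in that repo:

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

# Hypothetical model path; the actual file comes from the classifier repo.
session = ort.InferenceSession("nsfw_classifier.onnx")
input_name = session.get_inputs()[0].name

def is_nsfw(image: Image.Image, threshold: float = 0.8) -> bool:
    # Assumed preprocessing: 224x224 RGB, scaled to [0, 1], NCHW float32.
    # Check the model's metadata for its real expected layout.
    x = np.asarray(image.convert("RGB").resize((224, 224)), dtype=np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]  # HWC -> NCHW, batch of 1
    probs = session.run(None, {input_name: x})[0][0]
    # Assumed binary output [safe, unsafe]; real models often expose more classes.
    return float(probs[1]) >= threshold
```

The generation pipeline would then blank, blur, or discard any image for which is_nsfw returns True.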

axodox commented 1 year ago

It's akin to a blacksmith refusing to make a knife because it might be used to kill a human rather than as a tool.

No, it's not; it is more like a blacksmith refusing to make a knife which has a dangerous handle design and is prone to cutting your own hand. Unfortunately, these neural networks can output unsafe images even when not prompted that way at all. I have seen it multiple times.

If they want to do it, they'll do it.

I obviously cannot help that. In my opinion, content-filtering AI will get much better, and this problem will disappear before long; that is, only actually problematic images will be flagged. I would rather start with a more restrictive app than have it reported and taken down. As they say: better safe than sorry.

that I could recommend to friends or family who might be curious to play with it, so I want you to succeed

I am quite sure your family would not appreciate it if the app started to produce NSFW output on random prompts. I mean, I do not know you or your family, but I am quite sure my mother would not approve.

I want everyone to have AI art at their fingertips. Every child with a tablet or phone

And for that we need these apps to be safe and be in the app stores, and app stores are pretty strict about this. I will still need to make a number of changes for Unpaint to have a chance of being accepted and not suspended a week later.

As I understand it, it's not exactly hard to just bypass your safety code entirely and build the project without it. I don't think a computer-literate kid with ChatGPT access and the ability to download Visual Studio will struggle long. Nor will it be long before a non-protected fork pops up; this is GitHub, after all.

Sure, if you are a dev it is easy, but most people are not. Besides, then you are asking for trouble: it's like a toaster. If it shocks the user during normal use, the manufacturer is responsible; if the user disassembles it while plugged in and dies, then it is the user's fault.

And as you said, there are already better tools out there, so there is no need to fork my app; they can just use the apps already catering to those needs. I target a different market; not all of AI is about porn.

ke1ne commented 1 year ago

Guys, it's the author's strategy. If he doesn't want to disable the safety checker, you have no power or right to force him. You have the power to use another tool or write your own solution. Accept it and live with it. P.S. @axodox However, maybe some better detection algorithms exist to decrease false alerts.
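One common way to reduce the cost of false alerts without swapping detectors is to separate the decision threshold from the action: block only high-confidence detections, and merely blur the uncertain middle band, so a false positive costs the user a blurred image rather than a blocked one. A rough sketch of that policy follows; the apply_safety_policy helper and its band boundaries are made up for illustration and would need tuning against real data:

```python
from PIL import Image, ImageFilter

def apply_safety_policy(image: Image.Image, nsfw_prob: float,
                        block_at: float = 0.9, blur_at: float = 0.6) -> Image.Image:
    # High confidence: replace the image with a black placeholder.
    if nsfw_prob >= block_at:
        return Image.new("RGB", image.size)
    # Uncertain band: keep the image but blur it beyond recognition.
    if nsfw_prob >= blur_at:
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    # Below both thresholds: pass through untouched.
    return image
```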

axodox commented 1 year ago

@ke1ne I am open to suggestions :)

Aptronymist commented 1 year ago

It's akin to a blacksmith refusing to make a knife because it might be used to kill a human rather than as a tool.

No, it's not; it is more like a blacksmith refusing to make a knife which has a dangerous handle design and is prone to cutting your own hand. Unfortunately, these neural networks can output unsafe images even when not prompted that way at all. I have seen it multiple times.

I wish I was that naive.

"Dangerous handle design", as if 99.8% of the world didn't have one set of things between their legs or the other. Where exactly is the danger?

If they want to do it, they'll do it.

I obviously cannot help that. In my opinion, content-filtering AI will get much better, and this problem will disappear before long; that is, only actually problematic images will be flagged. I would rather start with a more restrictive app than have it reported and taken down. As they say: better safe than sorry.

that I could recommend to friends or family who might be curious to play with it, so I want you to succeed

I am quite sure your family would not appreciate it if the app started to produce NSFW output on random prompts. I mean, I do not know you or your family, but I am quite sure my mother would not approve.

I'm quite sure both of our mothers are mature enough to decide if they want to turn off an NSFW filter or not, don't you think? How do you think you got here, exactly? Your dad tripped and fell? Repeatedly? He's probably still tripping and falling, huh?

I love how you went out of your way to create a 'random prompt', got something that made you clutch your pearls, and are now assuming that the same will happen to everyone else and that they won't like it. Show me someone who wouldn't, given the chance, want to do precisely that sooner or later, then throw up their hands in frustration because it isn't allowed, and go use another app instead.

I want everyone to have AI art at their fingertips. Every child with a tablet or phone

And for that we need these apps to be safe and be in the app stores, and app stores are pretty strict about this. I will still need to make a number of changes for Unpaint to have a chance of being accepted and not suspended a week later.

Again, let me point out: this isn't the MS app store. Would you like me to direct you to something that could actually censor the photos for you? As in, not prevent a generation or heavy-handedly filter it, but censor out any bits that might be objectionable after generation and before display?

As I understand it, it's not exactly hard to just bypass your safety code entirely and build the project without it. I don't think a computer-literate kid with ChatGPT access and the ability to download Visual Studio will struggle long. Nor will it be long before a non-protected fork pops up; this is GitHub, after all.

Sure, if you are a dev it is easy, but most people are not. Besides, then you are asking for trouble: it's like a toaster. If it shocks the user during normal use, the manufacturer is responsible; if the user disassembles it while plugged in and dies, then it is the user's fault.

WTF are you even talking about? Did you accidentally see a breast or something as an infant? You're comparing things you literally won't even name to DYING. I'm starting to think you should probably try selling this elsewhere if you feel that the MS app store is so restrictive and your conscience can't handle someone accidentally seeing a nipple.

And as you said, there are already better tools out there, so there is no need to fork my app; they can just use the apps already catering to those needs. I target a different market; not all of AI is about porn.

Nice implication there. What market do you target exactly, then? 6-year-olds? 8-year-olds? 10? Anybody older than that is going to get curious and want to see 'naughty bits', so I have to wonder who you're targeting and precisely how naive you really are. Do you think adults are going to buy this when they can buy something that isn't forcefully censored? I've seen a photo of my grandmother in her bra that my grandfather took one year, after the extended family left the post-Xmas celebrations. So you're not likely to be able to sell it to anybody of any age range, if even the elderly get frisky with the Polaroid camera.

Maybe you should consider just making it an SD 2.0 model app? Or make your own models based on that, or something else that doesn't even contain that material by design, at the very core, where the scary things are hiding. After all, you already explicitly limited the available models that can be used. You could sell one version that has a few custom models with nothing NSFW in them, and one that the other 80% of people might want to buy.

axodox commented 1 year ago

@Aptronymist Sorry, but I have made my case. I will now close this thread.

Aptronymist commented 1 year ago

@Aptronymist Sorry, but I have made my case. I will now close this thread.

Lol, you misunderstand. I uninstalled your app right after trying it, due to everything I pointed out. You seem to be going about this backwards, is all.

Develop an app, THEN add the restrictions and whatnot to fit the platform for publication.

Instead, you have people installing certificates and crap from GitHub while you figure out how to fix the obvious and predictable complaints anyone should have seen coming. I made it clear that it made zero difference to me; I was simply replying.