AUTOMATIC1111 / stable-diffusion-webui-nsfw-censor

Inaccurate results, 80% of the time the output is a black image #7

Open jaysunxiao opened 1 year ago

jaysunxiao commented 1 year ago

Inaccurate results, 80% of the time the output is a black image.

2575044704 commented 1 year ago

Same here

2575044704 commented 1 year ago

Same here, whatever model I use.

3Diva commented 1 year ago

Yeah, I love the concept of this, but it's FAR too sensitive and blocks quite a few images that are completely fine. I'm watching the images generate, they're fully clothed and fine, but then at the last second the filter kicks in and turns them completely black. It's blocking images that are nowhere near NSFW territory.

I have an older, slower computer, so each image takes a couple of minutes to generate - it's hugely frustrating to watch an image come together, see it looking good, and get excited for it to finish, only to have it turned into nothing but black at the last second.

I would really love this feature if it weren't so sensitive, but since it blocks images that are fully clothed and fine, I'm sadly going to have to disable the extension. I hope it can be fine-tuned a bit more and made less sensitive. :)

jaysunxiao commented 1 year ago

Really hope this can be fixed.

kostia-official commented 1 year ago

It's possible to adjust its sensitivity in this file: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L65

With some monkey patching of that file you can change the adjustment value. It should be quite low, though; adjustment = -0.008 worked better on the images I tested.
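In case it's useful, here's a rough sketch of one way to apply that change without editing the installed safety_checker.py, assuming you can get a handle on the loaded StableDiffusionSafetyChecker instance (the relax_safety_checker helper and the 0.008 offset are just illustrative). Since forward() scores each concept as roughly cosine_similarity - threshold + adjustment, raising the stored thresholds by an offset has about the same effect as a negative adjustment of that size:

```python
import torch

def relax_safety_checker(checker, offset=0.008):
    """Make a loaded StableDiffusionSafetyChecker less sensitive.

    Raising each learned threshold by `offset` is roughly equivalent to
    using adjustment = -offset inside forward(), so benign images are
    less likely to be flagged and blacked out.
    """
    with torch.no_grad():
        checker.concept_embeds_weights += offset         # per-concept NSFW thresholds
        checker.special_care_embeds_weights += offset    # "special care" thresholds
    return checker

# Example usage (illustrative): patch the checker right after it is loaded.
# checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker")
# checker = relax_safety_checker(checker, offset=0.008)
```

The upside of patching the object instead of the file is that the change lives in your own code and survives a diffusers update; the downside is that you need to find where the extension instantiates the checker.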

MasterDenis commented 1 year ago

> It's possible to adjust its sensitivity in this file: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L65
>
> With some monkey patching of that file you can change the adjustment value. It should be quite low, though; adjustment = -0.008 worked better on the images I tested.

Hey, how do we get this out of the box? I want to use this better adjustment in my project. Where do I find this file in my A1111 installation so I can edit the value?