Stability-AI / rest-api-support

Stability REST API examples, issues, and discussions | https://api.stability.ai

No way to recover from an invalid prompt #1

Open juliendorra opened 1 year ago

juliendorra commented 1 year ago

Hi! There's an issue with Invalid Prompts in the API.

The way it is handled means the API can error out an app with no automated way to recover from the error.

I'm sending prompts through the API that are not manually written but are combinations of sources. I cannot fully control the sources, as they depend on end-user settings. My users don't manually write the prompts either.

Certain words in the combined prompt (sometimes even quite tame words, or polysemic ones with a common safe meaning, but that's another, much wider issue) trigger a 400 response: "Invalid prompts detected"

This is the same behavior as the popup in the UI.

For a manual UI, that's OK behavior; the human can try to guess the word. But for a machine-to-machine API, there are several issues:

  1. As far as I know, we don't have a list of these words to filter out beforehand.
  2. The API doesn't return the problematic word either, so the app at the other end cannot act on the 400 (for example, by removing the word and sending another request).
  3. The API doesn't offer an option to automatically filter out any supposedly NSFW word.

(4. This is on top of the previous issue that banned words in a negative prompt also trigger an "Invalid prompts detected" error, which of course makes no sense.)

My own preference as a developer would be for options 2 and 3 to be available.

I know that if there were an "auto filter" switch (option 3), I would turn it on now and not think about it again! Then maybe later I would use option 2 to automatically rewrite invalid prompts (tamer synonyms, or maybe an ML solution).
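To make the failure mode concrete, here's a minimal sketch of the machine-to-machine side as it stands today. The v1 text-to-image endpoint is real, but the engine id, the error-body check, and the prompt are illustrative, not official:

```python
import os
import requests

API_HOST = "https://api.stability.ai"
ENGINE_ID = "stable-diffusion-xl-1024-v1-0"  # illustrative engine id

def generate(prompt: str) -> requests.Response:
    """Send a text-to-image request and return the raw response."""
    return requests.post(
        f"{API_HOST}/v1/generation/{ENGINE_ID}/text-to-image",
        headers={
            "Authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
            "Accept": "application/json",
        },
        json={"text_prompts": [{"text": prompt}]},
    )

resp = generate("a mother fixing a robot toy for her kids")
if resp.status_code == 400 and "invalid_prompts" in resp.text:
    # Dead end today: the body names no offending word, so there is
    # nothing for the app to strip before retrying.
    raise RuntimeError("Invalid prompts detected, and no way to tell which word")
```

With option 2, that branch could strip the named word and retry instead of giving up.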

I would love feedback from the team and from other users of the API on this.

todd-elvers commented 1 year ago

Thanks for your feedback @juliendorra! Apologies for how long it took to get back to you; we value the community's feedback and try to respond ASAP.

Let me start by saying that I agree the current implementation of NSFW filtering leaves much to be desired.

You're right that we don't currently publish a list of banned words. We've added a card to our backlog for exposing the problematic word in the response and/or automatically filtering out the NSFW term. Once the upstream work for that is complete, we can add it to the project.

This work will need to be prioritized against other work, so please be patient while we address this issue.

juliendorra commented 1 year ago

Hi, here's an example. This is from a series of prompts that tell a whole story about a mum fixing toys for her kids…

The woman standing up, holding the robot toy in her hand. She is surrounded by two kids, a boy and a girl, both with big smiles on their faces. The kitchen table is now tidy, with the soldering iron and the circuit board off to the side. The woman is slim and has short, light-brown hair. She is wearing a white t-shirt, blue jeans, and glasses. The boy is wearing a blue t-shirt and blue shorts. The girl is wearing a yellow dress.

This doesn't work either in the API or in the studio UI: Invalid prompts detected

4 out of 5 images for this story had the issue. Even as a human, I have a hard time understanding what to remove…

Any news on this? It's really blocking: it can randomly reject totally innocuous ideas like this one and introduce random, uncontrollable, unfixable errors in the API, and thus in our apps 🙁

[edit: after split-testing, the only word that blocks the prompt is… kids. Remove just this word and it works. Doesn't make a lot of sense… but at least it would be useful to get the word back. Yes, I know that would expose the list to brute-force discovery, but…]
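That split test can be automated. A sketch of the brute-force version; it costs one API call per word and only isolates a blocker when removing that single word is enough:

```python
def find_blocking_words(prompt: str, is_valid) -> list[str]:
    """Drop one word at a time and re-check the prompt.

    `is_valid` should send the candidate prompt to the API and return
    False on the invalid_prompts 400. One call per word, so use sparingly.
    """
    words = prompt.split()
    return [
        word
        for i, word in enumerate(words)
        if is_valid(" ".join(words[:i] + words[i + 1:]))
    ]
```

In the story example above, this would come back with just `['kids']`.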

rajbala commented 1 year ago

I am having the same problem. These prompts are generated programmatically, and I am using the DreamStudio API.

Not sure what about these prompts triggers the invalid_prompts error:

Clouds, umbrella, and shield representing protection against failure
Broken cloud symbolizing cloud provider failure
Risk assessment matrix or scale to showcase different levels of risk
A solid foundation or base, possibly made of stone, supporting a structure
Interconnected cloud symbols, representing different providers working together
A safety vault or secure storage box, symbolizing secure data backup
A radar screen or monitoring dashboard displaying various metrics and alerts
A checklist or progress bar showing completion of tasks or updates
A group of people participating in a training session or workshop
A lighthouse or beacon, symbolizing guidance and protection against potential threats

rajbala commented 1 year ago

It happened again within the same day by different users of my service.

It seems the "Invalid prompts detected" exception is raised by simply checking keywords in the prompt. I'm assuming the prompts that refer to children triggered the error.

Colorful books stacked or arranged in a whimsical manner
A book with magical sparkles coming from its pages
A bookshelf filled with a variety of children's books
Illustrations of various diverse characters from children's books
A winding road or path representing a captivating plot
A beautiful, detailed illustration from a children's book
A child touching a book with interactive elements, such as pop-up features or textures
A calendar with designated reading times marked
An animated storyteller reading a book to an engaged group of children
A group of children excitedly gathered around a storyteller or a stack of books

rajbala commented 1 year ago

Invalid prompts detected:

evaluation, assessment, feedback, rating, teacher, management

"teacher" is a prohibited word!

I have to be candid: this is incredibly frustrating.

Without "teacher", the same prompt goes through:

evaluation, assessment, feedback, rating, management

rajbala commented 1 year ago

This is an invalid prompt. LOL. Just maddening.

An airplane ascending into the sky, symbolizing the successful execution of the 30-60-90 day sales plan

Arasiia commented 1 year ago

Hello, my users and I are also running into this same blocking. Any news on the progress of these blocks?

rahul-littlegreats commented 1 year ago

Same issue: totally kid-friendly prompts are getting this error.

blistick commented 1 year ago

Yes, same for me. Very innocuous prompts are returning error 400 from the API, and it's happening frequently.

If it's not addressed ASAP I'll need to switch to another provider for my diffusion needs. Honestly, can't an AI-based company implement a more sophisticated filtering model? Really?

andreasjhagen commented 1 year ago

Yeah, the invalid prompt thing is really irritating. It's not clearly communicated which words are not allowed.

I also use ChatGPT to generate prompts, and it irregularly throws these errors. I'm also thinking about moving to another AI image provider at this point if this isn't fixed.

After all, there are plenty of options out there at this point.

turbobuilt commented 1 year ago

Basically anything that says "kid" is banned, even "kids wearing clothing". I don't understand how the filter blocks a prompt like "a mother with 3 kids". I think the filter needs some positive examples with kids, not just negative ones. I'd be happy to help if it were open source, because I really like the API!
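Until there's an official list or an auto-filter flag, the only workaround seems to be a client-side substitution pass before sending. A sketch; the denylist entries come from words reported in this thread, and the replacements are unverified guesses you'd have to split-test yourself:

```python
import re

# Community-sourced, unofficial: words reported in this thread to trip
# the filter, mapped to replacement guesses. Stability publishes no such
# list, so verify each substitution empirically before relying on it.
SUBSTITUTIONS = {
    "kids": "little ones",
    "kid": "little one",
    "boy": "son",
    "teacher": "instructor",
}

def prefilter(prompt: str) -> str:
    """Replace known-blocked words (whole-word, case-insensitive)."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, SUBSTITUTIONS)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: SUBSTITUTIONS[m.group(0).lower()], prompt)

print(prefilter("A mother with 3 kids"))  # "A mother with 3 little ones"
```

It's brittle by design, but it keeps the 400s out of production until the API offers something better.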

simaofreitas commented 1 year ago

Quite annoying. A lot of images based on story summaries can't be generated. Any progress here? How can we avoid this?

rajbala commented 1 year ago

I decided to build an offering in this space instead of trying to wrestle with this issue. Check it out if you so wish: Diffute

The service currently supports inferencing and training of Stable Diffusion models including Stable Diffusion XL 1.0.

Feel free to ping me if you need capabilities that are not present today. I will happily add them.

csarigoz commented 1 year ago

Having the same problem from time to time. For example, this prompt got the "invalid prompt" error:

Taylor, Andrea, USTA National Tennis Center - A stunning, soft-colored artwork of Taylor, a brave child, playing an exhilarating tennis match against formidable opponents.

Do you know what could be wrong with this prompt? I guess it's because of the word "child". And do you know if there's a list of keywords that should be avoided in prompts?

DarrenChenOL commented 1 year ago

Would you be able to publish the banned words so we can tell our prompt generator to avoid them?

sharma0611 commented 1 year ago

+1

edgardz commented 11 months ago

Same here. Looking for alternative solutions because of this.

RobinDenaux commented 10 months ago

Boy is banned too:

A young boy and a friendly giant mech exploring a magical forest together.

My descriptions are generated by ChatGPT, which I instruct to keep very SFW. This is going to be a major problem for me too.

turbobuilt commented 10 months ago

You should just run a nudity filter like this on the output image instead of filtering the prompt:

https://github.com/nipunru/nsfw-detector-android
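In other words: generate first, then classify the output image and discard NSFW results. A Python sketch of that approach, assuming an off-the-shelf classifier from the Hugging Face hub; the model name is one example checkpoint, not an endorsement:

```python
from PIL import Image
from transformers import pipeline

# Filter outputs instead of inputs: score the generated image and
# discard it if it looks NSFW. "Falconsai/nsfw_image_detection" is one
# public checkpoint; any binary NSFW classifier would slot in here.
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

def is_safe(image_path: str, threshold: float = 0.5) -> bool:
    """Return True when the NSFW score stays under the threshold."""
    scores = {r["label"]: r["score"] for r in classifier(Image.open(image_path))}
    return scores.get("nsfw", 0.0) < threshold
```

The trade-off: you pay for generations that get discarded, but innocuous prompts stop failing up front.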