microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/
MIT License

[Feature Request] LLM error handling #3716

Closed dpolac closed 2 weeks ago

dpolac commented 1 month ago

Is your feature request related to a problem? Please describe. I am building a chat flow in promptflow that I intend to deploy as an HTTP endpoint. When the user of my chat submits certain content (related to hatred, racism, sexuality, etc.), the GPT-4o API endpoint returns an HTTP 500 error because the message is rejected by the built-in GPT content filtering. This causes the entire flow to end in an error, and the user of my API gets an HTTP 500 with no indication of what went wrong.

There are a few reasons why this is undesirable:

  1. The front end cannot choose between a "Bot won't answer this question" and a "Technical issue, try again later" error message, because from the API's perspective there is no difference.
  2. I cannot implement graceful failure, i.e. returning HTTP 200 with answer="I won't answer this question." when a message is caught by GPT filtering.
  3. Runs that ended with an error are marked in red and aren't taken into account in batch evaluations, so it's impossible to build an evaluation flow that verifies whether bad messages are filtered.

Describe the solution you'd like I'd like the LLM tool (and possibly all GPT tools) to have an additional optional parameter, "allowed_http_codes", that lets me specify which codes other than 2xx are allowed, e.g. allowed_http_codes: [500, 404]. This way, I can handle them in a subsequent Python tool.

The new parameter would appear under "Advanced" in the VS Code extension and Azure AI Studio.
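
For illustration only, if such a parameter existed, a subsequent Python tool could turn a passed-through error code into a graceful answer. The output shape below (status_code/content fields) is purely an assumption about how the LLM tool might expose the result, not an existing promptflow contract:

```python
# Hypothetical downstream Python tool consuming the LLM node output if
# "allowed_http_codes" existed. The status_code/content fields are assumptions.
from promptflow.core import tool


@tool
def handle_llm_result(llm_output: dict) -> str:
    if llm_output.get("status_code", 200) == 200:
        return llm_output.get("content", "")
    # A non-2xx code passed through by the LLM node, e.g. a content-filter
    # rejection: degrade gracefully instead of failing the whole flow.
    return "I won't answer this question."
```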

Describe alternatives you've considered The only alternative I found is using a Python tool instead of the LLM tool and wrapping the GPT call in a try-except block. However, with this approach I cannot use variants.
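
For reference, a minimal sketch of that workaround, assuming an Azure OpenAI connection and the openai v1 SDK; the deployment name, API version fallback, and the content_filter check are assumptions about how the blocked prompt surfaces:

```python
# Python tool wrapping the GPT call so a filtered prompt becomes a normal
# string output instead of failing the whole flow (sketch, not the LLM tool).
from openai import AzureOpenAI, BadRequestError
from promptflow.core import tool
from promptflow.connections import AzureOpenAIConnection


@tool
def safe_chat(connection: AzureOpenAIConnection, question: str) -> str:
    client = AzureOpenAI(
        api_key=connection.api_key,
        azure_endpoint=connection.api_base,
        api_version=connection.api_version or "2024-02-01",  # assumed fallback
    )
    try:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed deployment name
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content
    except BadRequestError as err:
        # Azure OpenAI typically rejects filtered prompts with a
        # "content_filter" error code; return a graceful answer instead.
        if "content_filter" in str(err):
            return "I won't answer this question."
        raise
```

The drawback noted above still applies: because the call lives in a Python tool rather than an LLM node, prompt variants cannot be used.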

github-actions[bot] commented 3 weeks ago

Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!