Closed dpolac closed 1 month ago
Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!
Is your feature request related to a problem? Please describe. I am building a chat flow in promptflow that I intend to deploy as an HTTP endpoint. When the user of my chat inputs certain content (related to hatred, racism, sexuality, etc.), the GPT-4o API endpoint returns an HTTP 500 error because the message is filtered by the built-in GPT content filtering. This causes the entire flow to end in error, and the user of my API gets an HTTP 500 without any indication of what went wrong.
There are a few reasons why this is undesirable:
Describe the solution you'd like I'd like the LLM tool (and possibly all GPT tools) to have an additional optional parameter "allowed_http_codes" that lets me specify which status codes other than 2xx are allowed. This way, I can handle them in a subsequent Python tool.
allowed_http_codes: [500, 404]
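A minimal sketch of how the proposed parameter could be interpreted (the name `allowed_http_codes` comes from this request; the helper below is purely illustrative and not part of promptflow):

```python
def is_allowed(status_code: int, allowed_http_codes=(500, 404)) -> bool:
    """Treat any 2xx as success; additionally pass through the explicitly
    listed codes so a downstream Python tool can handle them instead of
    the whole flow failing."""
    return 200 <= status_code < 300 or status_code in allowed_http_codes
```

A subsequent Python tool could then branch on the passed-through status code, e.g. mapping a content-filter 500 to a friendly message for the end user.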
The new parameter would show up under "Advanced" in the VS Code plugin and Azure AI Studio.
Describe alternatives you've considered The only alternative I found is using a Python block instead of an LLM block and wrapping the GPT call in a try-except block. However, this approach means that I cannot use variants.
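For reference, that workaround looks roughly like this (a sketch only: `call_gpt` stands in for the real chat-completion call, and in practice one would catch the SDK's specific error type, such as `openai.APIStatusError`, rather than bare `Exception`):

```python
def safe_chat(call_gpt, fallback="Sorry, your message could not be processed."):
    """Wrap a zero-argument chat-completion callable in try-except so a
    content-filter error surfaces as a friendly message to the end user
    instead of an unexplained HTTP 500 from the flow."""
    try:
        return call_gpt()
    except Exception:  # in practice: the OpenAI SDK's specific error class
        return fallback
```

Because this replaces the LLM block with a Python block, the flow loses variant support, which is the drawback described above.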