Closed: tsujimic closed this issue 1 week ago
Your prompt is raising a content safety alert because the content safety filter considers your prompt to contain violence:
content_filter_results={'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': True, 'severity': 'medium'}})]
So it produces a 400 error.
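For the caller side, here is a minimal sketch of catching that 400 (assuming litellm surfaces it as BadRequestError, as noted further down in this thread; the deployment name and message are placeholders):

```python
# Minimal sketch, not from the thread: assumes litellm raises BadRequestError
# for the Azure content-filter 400, and uses a placeholder deployment name.
import litellm
from litellm import completion

try:
    response = completion(
        model="azure/gpt-35-turbo",  # placeholder deployment
        messages=[{"role": "user", "content": "..."}],
    )
except litellm.exceptions.BadRequestError as e:
    # Azure returns HTTP 400 when the content safety filter trips;
    # the filter categories show up in the error body quoted above.
    print("content filter / bad request:", e)
```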
I understand that my prompt is causing a content filter error and a 400 Bad Request error. When I get this error together with "During handling of the above exception, another exception occurred:", my caller's Python code is not able to handle it:
from fastapi.responses import JSONResponse, StreamingResponse
from litellm import acompletion

# ... inside an async FastAPI route handler:
try:
    response = await acompletion(
        model=model,
        messages=messages,
        stream=stream,
    )

    async def data_generator():
        async for chunk in response:
            yield chunk.model_dump_json()

    return StreamingResponse(data_generator(), media_type="application/json-lines")
except Exception as e:
    # When "During handling of the above exception, another exception occurred:",
    # the error cannot be handled.
    return JSONResponse(content=getattr(e, "message", "error occurred"))
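One thing worth checking, as a sketch rather than a confirmed root cause: with stream=True, the stream is only consumed inside data_generator, so an exception raised while iterating happens after the outer try/except has already returned and never reaches it. Guarding the iteration itself would look roughly like this (the FastAPI imports and the JSON error chunk are assumptions):

```python
import json

from fastapi.responses import JSONResponse, StreamingResponse
from litellm import acompletion
import litellm


async def stream_chat(model, messages):
    try:
        response = await acompletion(model=model, messages=messages, stream=True)
    except litellm.exceptions.BadRequestError as e:
        # Errors raised before streaming starts (e.g. the content filter 400)
        return JSONResponse(status_code=400, content={"error": str(e)})

    async def data_generator():
        try:
            async for chunk in response:
                yield chunk.model_dump_json()
        except Exception as e:
            # Errors raised mid-stream surface here, not in the outer handler
            yield json.dumps({"error": str(e)})

    return StreamingResponse(data_generator(), media_type="application/json-lines")
```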
@tsujimic what is the error raised by your python caller code?
the exception raised is the same - litellm.exceptions.BadRequestError, which should be of type Exception (it inherits from the openai BadRequestError)
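A quick way to sanity-check that inheritance chain (assuming the openai and litellm packages are installed):

```python
from litellm.exceptions import BadRequestError as LiteLLMBadRequestError
from openai import BadRequestError as OpenAIBadRequestError

# litellm's BadRequestError subclasses openai's, which subclasses Exception,
# so a plain `except Exception` clause should catch it.
print(issubclass(LiteLLMBadRequestError, OpenAIBadRequestError))  # True
print(issubclass(LiteLLMBadRequestError, Exception))              # True
```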
unable to repro - this works for me
from litellm import acompletion, APIError, completion
import asyncio, httpx, traceback
import litellm


async def try_acompletion_error():
    model = "azure/gpt-3.5-turbo"
    request = httpx.Request(
        method="POST",
        url="https://azure.com/"
    )
    exception_to_raise = APIError(
        status_code=400,
        message="invalid_request_error",
        llm_provider="azure",
        request=request,
        model="gpt-35-turbo",
    )
    setattr(exception_to_raise, "response", httpx.Response(status_code=400, request=request))
    try:
        response = completion(
            model=model,
            messages=[{"role": "user", "content": "Hey"}],
            api_version="2023-06-12",
            # stream=True,
            mock_response=exception_to_raise,  # passing an exception as mock_response makes the call raise it
            num_retries=0
        )
        chunks = []

        async def data_generator():
            async for chunk in response:
                chunks.append(chunk)
    except Exception as e:
        # When "During handling of the above exception, another exception occurred:",
        # the error cannot be handled.
        print("ERROR CAUGHT! - {}".format(str(e)))


asyncio.run(try_acompletion_error())
output:
(base) krrishdholakia@Krrishs-MacBook-Air temp_py_folder % python3 linting_tests.py
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
ERROR CAUGHT! - litellm.BadRequestError: AzureException BadRequestError - litellm.APIError: invalid_request_error
(base) krrishdholakia@Krrishs-MacBook-Air temp_py_folder %
What happened?
Exception handling using the litellm Python SDK fails with "During handling of the above exception, another exception occurred:". This sometimes happens when a content filter error occurs in Azure OpenAI Service.
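For context, that message is Python's implicit exception chaining: a second exception was raised while an except block was handling the first, and the original stays reachable via __context__ on the exception you catch. A self-contained illustration with placeholder functions (not litellm code):

```python
# Illustration of implicit exception chaining (placeholder functions, not litellm code).
def inner():
    raise ValueError("first error")          # the original exception

def handler():
    try:
        inner()
    except ValueError:
        raise RuntimeError("second error")   # raised while handling the first

try:
    handler()
except RuntimeError as e:
    print("caught:", e)
    print("original (__context__):", repr(e.__context__))
```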