BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

[Bug]: DALL-E connection error #4454

Closed Clad3815 closed 1 day ago

Clad3815 commented 3 days ago

What happened?

When I make a request to DALL·E through the LiteLLM proxy, I get the error below.

config.yaml

model_list:
  # OpenAI models
  - model_name: "gpt-3.5-turbo"
    litellm_params:
      model: openai/gpt-3.5-turbo
  - model_name: "gpt-4o"                     
    litellm_params:
      model: openai/gpt-4o
  - model_name: "gpt-4-turbo"                
    litellm_params:
      model: openai/gpt-4-turbo
  - model_name: tts-1
    litellm_params:
      model: openai/tts-1
  - model_name: tts-1-hd
    litellm_params:
      model: openai/tts-1-hd
  - model_name: dall-e-2
    litellm_params:
      model: openai/dall-e-2
    model_info:
      mode: image_generation
  - model_name: dall-e-3
    litellm_params:
      model: openai/dall-e-3
    model_info:
      mode: image_generation
  - model_name: text-moderation-stable
    litellm_params:
      model: openai/text-moderation-stable
  - model_name: text-moderation-latest
    litellm_params:
      model: openai/text-moderation-latest
  - model_name: whisper-1
    litellm_params:
      model: openai/whisper-1
    model_info:
      mode: audio_transcription

API Call:

const image = await openai.images.generate({
    model: "dall-e-3",
    prompt: rewrittenPrompt.dalle_prompt,
    n: 1,
    size: '1792x1024',
    quality: 'hd'
});

The error returned to the OpenAI client:

{
  status: 500,
  headers: {
    'content-length': '143',
    'content-type': 'application/json',
    date: 'Fri, 28 Jun 2024 10:01:00 GMT',
    server: 'uvicorn'
  },
  request_id: undefined,
  error: {
    message: 'litellm.APIConnectionError: APIConnectionError: OpenAIException - Connection error.',
    type: null,
    param: null,
    code: 500
  },
  code: 500,
  param: null,
  type: null
}

Relevant log output

litellm-1  | 10:01:02 - LiteLLM:DEBUG: main.py:4637 - initial list of deployments: [{'model_name': 'dall-e-3', 'litellm_params': {'model': 'openai/dall-e-3'}, 'model_info': {'id': '800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd', 'db_model': False, 'mode': 'image_generation'}}]
litellm-1  | 10:01:02 - LiteLLM:DEBUG: caching.py:29 - async get cache: cache key: 10-01:cooldown_models; local_only: False
litellm-1  | 10:01:02 - LiteLLM:DEBUG: caching.py:29 - in_memory_result: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:02 - LiteLLM:DEBUG: caching.py:29 - get cache: cache result: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:02 - LiteLLM Router:DEBUG: router.py:3078 - retrieve cooldown models: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:02 - LiteLLM Router:DEBUG: router.py:4658 - async cooldown deployments: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:02 - LiteLLM:DEBUG: main.py:4637 - initial list of deployments: [{'model_name': 'dall-e-3', 'litellm_params': {'model': 'openai/dall-e-3'}, 'model_info': {'id': '800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd', 'db_model': False, 'mode': 'image_generation'}}]
litellm-1  | 10:01:02 - LiteLLM:DEBUG: caching.py:29 - async get cache: cache key: 10-01:cooldown_models; local_only: False
litellm-1  | 10:01:02 - LiteLLM:DEBUG: caching.py:29 - in_memory_result: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:02 - LiteLLM:DEBUG: caching.py:29 - get cache: cache result: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:02 - LiteLLM Router:DEBUG: router.py:3078 - retrieve cooldown models: ['800eb15eb81d82eeb730f3754e4e17f6f614f910a04eed712766ab6c92021cbd']
litellm-1  | 10:01:03 - LiteLLM Router:DEBUG: router.py:2164 - Traceback (most recent call last):
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1522, in _request
litellm-1  |     response = await self._client.send(
litellm-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1661, in send
litellm-1  |     response = await self._send_handling_auth(
litellm-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1689, in _send_handling_auth
litellm-1  |     response = await self._send_handling_redirects(
litellm-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects
litellm-1  |     response = await self._send_single_request(request)
litellm-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1763, in _send_single_request
litellm-1  |     response = await transport.handle_async_request(request)
litellm-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/custom_httpx/azure_dall_e_2.py", line 10, in handle_async_request
litellm-1  |     if "images/generations" in request.url.path and request.url.params[
litellm-1  |                                                     ^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/httpx/_urls.py", line 599, in __getitem__
litellm-1  |     return self._dict[key][0]
litellm-1  |            ~~~~~~~~~~^^^^^
litellm-1  | KeyError: 'api-version'
litellm-1  | 
litellm-1  | The above exception was the direct cause of the following exception:
litellm-1  | 
litellm-1  | Traceback (most recent call last):
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 3947, in aimage_generation
litellm-1  |     response = await init_response
litellm-1  |                ^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 1148, in aimage_generation
litellm-1  |     raise e
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 1131, in aimage_generation
litellm-1  |     response = await openai_aclient.images.generate(**data, timeout=timeout)  # type: ignore
litellm-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/openai/resources/images.py", line 504, in generate
litellm-1  |     return await self._post(
litellm-1  |            ^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post
litellm-1  |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
litellm-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request
litellm-1  |     return await self._request(
litellm-1  |            ^^^^^^^^^^^^^^^^^^^^
litellm-1  |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1556, in _request
litellm-1  |     raise APIConnectionError(request=request) from err
litellm-1  | openai.APIConnectionError: Connection error.
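The traceback points at the actual root cause: the custom httpx transport in litellm/llms/custom_httpx/azure_dall_e_2.py indexes request.url.params["api-version"] on every images/generations request, but a plain OpenAI request carries no api-version query parameter, so the lookup raises KeyError, which the OpenAI client then surfaces as a generic APIConnectionError. Below is a minimal sketch of a defensive rewrite of that check; the class name and the elided Azure branch are assumptions reconstructed from the traceback, not LiteLLM's actual patch.

import httpx

class AsyncCustomHTTPTransport(httpx.AsyncHTTPTransport):
    """Sketch of the transport from azure_dall_e_2.py (class name assumed)."""

    async def handle_async_request(self, request: httpx.Request) -> httpx.Response:
        # The original code indexes request.url.params["api-version"], which
        # raises KeyError for api.openai.com requests that carry no such query
        # parameter. QueryParams.get() with a default avoids that, so the
        # Azure-only branch no longer breaks plain OpenAI image calls.
        api_version = request.url.params.get("api-version", "")
        if "images/generations" in request.url.path and api_version:
            pass  # Azure DALL-E 2 async polling logic would go here (elided)
        return await super().handle_async_request(request)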


ishaan-jaff commented 1 day ago

Able to repro this when my config has

model_list:
  - model_name: dall-e-3
    litellm_params:
      model: openai/dall-e-3
      api_key: os.environ/OPENAI_API_KEY
    model_info:
      mode: image_generation

but it works fine when my config is

model_list:
  - model_name: dall-e-3
    litellm_params:
      model: dall-e-3
      api_key: os.environ/OPENAI_API_KEY
    model_info:
      mode: image_generation
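So dropping the openai/ prefix, as in the second config, appears to be a workaround: per the repro above, the failing code path only triggers for the openai/dall-e-3 deployment. As a quick end-to-end check of the working config, the snippet below calls the proxy with the OpenAI Python SDK; base_url and api_key are placeholders for your own deployment.

from openai import OpenAI

# Point the SDK at the LiteLLM proxy; URL and key below are placeholders.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

image = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor fox",
    n=1,
    size="1792x1024",
    quality="hd",
)
print(image.data[0].url)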