BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: Custom API Server for image generation not working .... failed in get_llm_provider #6496

Closed: mickymics closed this issue 3 weeks ago

mickymics commented 3 weeks ago

What happened?

I am trying to invoke a custom API server for an image-generation LLM. The model stabilityai/stable-diffusion-xl-base-1.0 has been deployed using KServe and exposes a URL:

NAME         URL                                 READY   PREV   LATEST   PREVROLLEDOUTREVISION   LATESTREADYREVISION          AGE
text2image   http://text2image.fm.aac5.amd.com   True           100                              text2image-predictor-00001   24d

I have defined custom_handler.py:

cat custom_handler.py

from typing import Optional, Union, Any
import litellm
import httpx
from litellm import CustomLLM
from litellm.types.utils import ImageResponse, ImageObject

class ImageGenerationCustomLLM(CustomLLM):
    async def aimage_generation(
        self,
        model: str,
        prompt: str,
        api_base: Optional[str],
        api_key: Optional[str],
        model_response: ImageResponse,
        optional_params: dict,
        logging_obj: Any,
        timeout: Optional[Union[float, httpx.Timeout]] = None,
        client: Optional[httpx.AsyncClient] = None,
    ) -> ImageResponse:
        try:
            payload = {
                "model": model,
                "prompt": prompt,
                "api_base": api_base,
                "api_key": api_key,
                "num_inference_steps": optional_params.get("num_inference_steps", "20"),
                "guidance_scale": optional_params.get("guidance_scale", "9.5"),
                "height": optional_params.get("height", "512"),
                "width": optional_params.get("width", "512")
            }
            response = await litellm.aimage_generation(**payload)
            return response
        except Exception as e:
            raise Exception(f"Error generating image: {str(e)}")

And this is my proxy_config:

apiVersion: v1
data:
  proxy_config.yaml: |
    model_list:

I am trying to invoke it using curl:

curl -v -H "Content-Type: application/json" http://litellm-service.litellm.svc.cluster.local:4000/images/generations -d@./test.json

test.json contains:

{ "model": "my-custom-model", "prompt": "An astronaut riding a horse on the beach", "num_inference_steps": "20", "guidance_scale": "9.5", "height": "512", "width": "512" }


What is the right way to invoke my LLM model deployed in KServe? I need to pass the following payload:

{
  "model": "my-custom-model",
  "prompt": "An astronaut riding a horse on the beach",
  "num_inference_steps": "20",
  "guidance_scale": "9.5",
  "height": "512",
  "width": "512"
}

Relevant log output

**litellm-proxy throws the following error:**

proxy_server.py:4013 - litellm.proxy.proxy_server.image_generation(): Exception occured - litellm.APIConnectionError: Error generating image: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=my-model
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
  File "/app/custom_handler.py", line 31, in aimage_generation
    response = await litellm.aimage_generation(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1227, in wrapper_async
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 1083, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 4419, in aimage_generation
    raise exception_type(
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 4399, in aimage_generation
    _, custom_llm_provider, _, _ = get_llm_provider(
                                   ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 313, in get_llm_provider
    raise e
  File "/usr/local/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 290, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=my-model
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 4412, in aimage_generation
    response = await init_response  # type: ignore
               ^^^^^^^^^^^^^^^^^^^
  File "/app/custom_handler.py", line 34, in aimage_generation
    raise Exception(f"Error generating image: {str(e)}")
Exception: Error generating image: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=my-model
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

Received Model Group=my-custom-model
Available Model Group Fallbacks=None LiteLLM Retried: 2 times, LiteLLM Max Retries: 3
INFO:     127.0.0.6:55369 - "POST /images/generations HTTP/1.1" 500 Internal Server Error


krrishdholakia commented 3 weeks ago

Unable to repro. This works for me.

(Screenshot: successful image generation repro, 2024-10-29)

Please make sure you're using the latest version of litellm.

Here's my sample config for easy repro.

model_list:
  - model_name: my-custom-model
    litellm_params:
      model: my-custom-llm/my-custom-model
      api_key: my-fake-key

litellm_settings:
  custom_provider_map:
    - {"provider": "my-custom-llm", "custom_handler": .example_config_yaml.custom_handler.my_custom_llm}
mickymics commented 3 weeks ago

Hi, I am using this k8s Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: litellm-deployment
  namespace: litellm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: litellm
  template:
    metadata:
      labels:
        app: litellm
    spec:
      containers:

Can you please let me know where my mistake is?

krrishdholakia commented 3 weeks ago

This could be a cached image (check the version on the Swagger UI).
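To confirm which version the pod is actually running (independent of the Swagger UI), one option is to exec into the container and run a quick check, e.g.:

# Run inside the litellm container, e.g. via `kubectl -n litellm exec -it <pod> -- python`.
from importlib.metadata import version

print(version("litellm"))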

Try running the Docker image locally to see if it works as expected. On our end this seems to be working fine.

If you can share your server startup logs, that might help as well.