BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: moderation endpoint is broken #6020

Open Clad3815 opened 4 days ago

Clad3815 commented 4 days ago

What happened?

When using the OpenAI moderation endpoint through the proxy, LiteLLM seems to send the "openai/" prefix on to the OpenAI API:

    const response = await openai.moderations.create({
        input: text,
        // model: "omni-moderation-latest"
        model: "text-moderation-latest"
    });

Relevant log output

litellm-1  | INFO:     172.30.0.1:43458 - "POST /v1/moderations HTTP/1.1" 400 Bad Request
litellm-1  | 17:05:41 - LiteLLM Proxy:ERROR: proxy_server.py:5442 - litellm.proxy.proxy_server.moderations(): Exception occured - Error code: 400 - {'error': {'message': "Invalid value for 'model' = openai/text-moderation-latest. Please check the OpenAI documentation and try again.", 'type': 'invalid_request_error', 'param': 'model', 'code': None}
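The error string suggests the configured value `openai/text-moderation-latest` is being forwarded verbatim instead of having its provider prefix stripped. As a rough conceptual illustration of the expected behaviour (not LiteLLM's actual code):

    // Conceptual sketch only, not LiteLLM's implementation: strip the
    // "openai/" provider prefix before forwarding the model name upstream.
    const configuredModel = "openai/text-moderation-latest";
    const upstreamModel = configuredModel.startsWith("openai/")
        ? configuredModel.slice("openai/".length)
        : configuredModel;
    // upstreamModel is now "text-moderation-latest", which the OpenAI API accepts.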

Twitter / LinkedIn details

No response

krrishdholakia commented 3 days ago

    const response = await openai.moderations.create({
        input: text,
        // model: "omni-moderation-latest"
        model: "text-moderation-latest"
    });

Can I see how you set this up in your config? That would help to repro.

Clad3815 commented 3 days ago

I'm using the Docker version, and here is my config:

    model_list:
      - model_name: omni-moderation-latest
        litellm_params:
          model: openai/omni-moderation-latest

      - model_name: text-moderation-stable
        litellm_params:
          model: openai/text-moderation-stable

      - model_name: text-moderation-latest
        litellm_params:
          model: openai/text-moderation-latest
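
Until this is fixed, a possible client-side workaround (a sketch only, assuming the app has direct access to an OPENAI_API_KEY) is to bypass the proxy for moderation calls and hit the OpenAI API directly with the unprefixed model name:

    // Sketch of a temporary workaround (assumes OPENAI_API_KEY is set for the app):
    // send moderation requests straight to OpenAI instead of through the proxy.
    import OpenAI from "openai";

    const directOpenAI = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    const moderation = await directOpenAI.moderations.create({
        input: text,
        model: "text-moderation-latest",
    });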