BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: await twice for acompletion of custom model #4458

Closed · MarkintoshZ closed this issue 3 months ago

MarkintoshZ commented 3 months ago

What happened?

Description

When using a custom model (the `custom/` prefix), `acompletion` returns a coroutine that itself resolves to another coroutine, so the caller has to await twice to get the completion result.
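The double-await behavior can be sketched independently of litellm. This is a minimal illustration of the bug pattern (not litellm's actual internals): an async wrapper that returns an inner coroutine without awaiting it, forcing callers to await twice.

```python
import asyncio

# Minimal sketch of the pattern described above (not litellm's actual
# code): an async wrapper that returns an inner coroutine object
# instead of its result.

async def _inner() -> str:
    return "completion result"

async def buggy_acompletion():
    # Bug: returns the coroutine object itself, not its result.
    return _inner()

async def fixed_acompletion() -> str:
    # Fix: await the inner coroutine before returning.
    return await _inner()

async def main() -> tuple:
    doubled = await (await buggy_acompletion())  # two awaits needed
    single = await fixed_acompletion()           # one await suffices
    return doubled, single

print(asyncio.run(main()))
```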

To Reproduce

Python 3.10, litellm 1.38.10

import asyncio
import litellm

async def main():
    # Note: two awaits are required to get the actual response
    result = await (await litellm.acompletion(
        model="custom/gpt-3.5-turbo-0125",
        base_url="https://api.openai.com/v1",
        api_key="sk-*****************************************************",
        messages=[{"role": "user", "content": "hi"}],
    ))
    print(result)

asyncio.run(main())

Relevant log output

No response

Twitter / LinkedIn details

No response

krrishdholakia commented 3 months ago

custom/ is deprecated in favor of openai/.

I'll update the docs for this as well. Can you run with openai/ and let me know if the error persists?

https://docs.litellm.ai/docs/providers/openai_compatible
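Under that suggestion, the repro above would use the openai/ prefix instead of custom/. A sketch of the migrated call (the placeholder API key is illustrative, and this is not verified against the current litellm release):

```python
import asyncio
import litellm

async def main():
    # Same request as the repro above, but with the supported "openai/"
    # prefix for OpenAI-compatible endpoints, per the linked docs.
    # With a correctly working provider, a single await should suffice.
    result = await litellm.acompletion(
        model="openai/gpt-3.5-turbo-0125",
        base_url="https://api.openai.com/v1",
        api_key="sk-...",  # placeholder; substitute a real key
        messages=[{"role": "user", "content": "hi"}],
    )
    print(result)

asyncio.run(main())
```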