Closed: MarkintoshZ closed this issue 3 months ago
What happened?

Description

When using a custom model, the `acompletion` function returns a coroutine that returns a coroutine, so we need to await twice to get the completion result.

To Reproduce Bug

Python 3.10, litellm 1.38.10

```python
import asyncio
import litellm

async def main():
    result = await (await litellm.acompletion(
        model="custom/gpt-3.5-turbo-0125",
        base_url="https://api.openai.com/v1",
        api_key="sk-*****************************************************",
        messages=[{"role": "user", "content": "hi"}],
    ))
    print(result)

asyncio.run(main())
```
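The behavior described above can be reproduced in plain asyncio, independent of litellm: when an `async` function returns a coroutine object instead of awaiting it, the caller must await twice to reach the actual value. This is a minimal illustration of the pattern, not litellm's internals — the function names here are made up for the sketch.

```python
import asyncio

async def inner():
    # Stands in for the underlying completion call.
    return "completion result"

async def outer():
    # Bug pattern: returning the coroutine object instead of `await inner()`
    # forces the caller to await twice, mirroring the reported behavior.
    return inner()

async def main():
    pending = await outer()   # first await yields another coroutine
    result = await pending    # second await yields the actual value
    return result

print(asyncio.run(main()))
```

The fix on the library side is simply to `await inner()` inside `outer()`, so a single `await` suffices for the caller.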
`custom/` is deprecated in favor of `openai/`.
I'll update the docs for this as well. Can you run with `openai/` and let me know if the error persists?
https://docs.litellm.ai/docs/providers/openai_compatible