openai / openai-python

The official Python library for the OpenAI API
https://pypi.org/project/openai/
Apache License 2.0

openai migration error #1876

Open Shihab-Litu opened 3 days ago

Shihab-Litu commented 3 days ago

Confirm this is an issue with the Python library and not an underlying OpenAI API

Describe the bug

I want to migrate my code to openai 1.54.4, but I got the error below when I ran the `openai migrate` command in a Linux environment:

Error: Failed to download Grit CLI from https://github.com/getgrit/gritql/releases/latest/download/marzano-x86_64-unknown-linux-gnu.tar.gz

To Reproduce

An error screenshot is attached:

Code snippets

No response

OS

Linux 20.04 LTS

Python version

Python 3.9.12

Library version

openai 1.54.4

RobertCraigie commented 3 days ago

Can you try installing the Grit CLI from npm? https://docs.grit.io/cli/quickstart#installation

Then you can run `grit apply openai` to get the same migration.
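For reference, Robert's suggested fallback might look like the following shell commands; the package name is taken from the Grit quickstart docs, and exact flags may differ by version:

```shell
# Install the Grit CLI globally via npm (an alternative to the tarball
# download that `openai migrate` attempts, which failed here).
npm install --global @getgrit/cli

# Run the same openai codemod that `openai migrate` would have applied,
# from the root of the project you want to migrate.
grit apply openai
```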

Jesus0510-max commented 1 day ago

I get the following error: You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Shihab-Litu commented 1 day ago

@RobertCraigie This is helpful. I was able to migrate my code (https://github.com/InfiAgent/InfiAgent/blob/main/pipeline/src/infiagent/llm/client/llama.py), but I get the error that `aclient` (`AsynchOpenAI`) is not defined when I run it, even though I have defined `aclient`:

```python
def __init__(self, data):
    super().__init__(data)
    client = OpenAI(api_key="", api_base="http://localhost:8000/v1")
    aclient = AsyncOpenAI(api_key="", api_base="http://localhost:8000/v1")
```

Is this the correct way to define `client` and `aclient`, instead of defining them globally? Note: I am running the vLLM server on my local PC, but it doesn't give me any answer to the prompt; it shows a failure at the LLM call.

(screenshot attached: server_running)

RobertCraigie commented 1 day ago

you'll need to replace any reference to `AsynchOpenAI` with `AsyncOpenAI`