LYH-YF opened this issue 12 months ago
Hi @LYH-YF, the GSM8K experiment is based on the GPT-3.5-Turbo-0301 completion model.
Due to recent changes in OpenAI's API, the 3.5-turbo-0301 completion mode is no longer available, but it can be obtained through Azure OpenAI.
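For reference, a minimal sketch of calling the completion endpoint through Azure OpenAI with the pre-1.0 SDK (assuming openai==0.28 and an Azure OpenAI resource with a gpt-35-turbo (0301) deployment; the resource name, deployment name, key, and API version below are placeholders):

```python
import openai

# Placeholders: fill in your own Azure OpenAI resource, key, and deployment name.
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

prompt = "..."  # the few-shot CoT prompt plus the question, built as in the notebook

response = openai.Completion.create(
    engine="<your-gpt-35-turbo-0301-deployment>",  # Azure uses the deployment name, not the model name
    prompt=prompt,
    max_tokens=400,
    temperature=0,
    top_p=1,
    n=1,
    stream=False,
    stop="\n\n",
)
print(response["choices"][0]["text"])
```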
In addition, the reason for the poor performance in chat mode is the "stop": "\n\n" parameter.

Thanks for your reply. I removed the stop parameter, and the result reached 0.68+. So there may be a gap of about 0.10 between chat mode and completion mode (GPT-3.5-Turbo-0301).
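For reference, a minimal sketch of the chat-mode request with the stop parameter removed (assuming the pre-1.0 openai==0.28 SDK; the other parameter values follow the snippet later in this thread):

```python
import openai

openai.api_key = "sk-XXX"  # placeholder key

prompt = "..."  # the few-shot CoT prompt plus the question, built as in the notebook

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",
    messages=[{"role": "system", "content": ""}, {"role": "user", "content": prompt}],
    max_tokens=400,
    temperature=0,
    top_p=1,
    n=1,
    stream=False,
    # "stop": "\n\n" intentionally omitted, as discussed above
)
answer = response["choices"][0]["message"]["content"]
```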
Hello, can I have a look at your COT.ipynb file? My openai package is version 1.0.0, and I am using:
```python
import openai

openai.api_key = "sk-XXX"
```
as well as
```python
import json

instruction = "Please reference the following examples to answer the math question,\n"
prompt = instruction + prompt_complex + "\n\nQuestion: " + question
request_data = {
    "messages": [{"role": "system", "content": ""}, {"role": "user", "content": prompt}],
    "max_tokens": 400,
    "temperature": 0,
    "top_p": 1,
    "n": 1,
    "stream": False,
    "stop": "\n\n",
}
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",
    **request_data,
)
```
But this error was reported:

```
APIRemovedInV1:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
I don't know how to solve it, so can you please send me the code you changed? Thank you very much!
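For what it's worth, here is a minimal sketch of the same request written against the openai>=1.0.0 client interface. This is not the notebook's actual code; prompt is assumed to be built exactly as in the snippet above.

```python
from openai import OpenAI

client = OpenAI(api_key="sk-XXX")  # or rely on the OPENAI_API_KEY environment variable

prompt = "..."  # instruction + prompt_complex + "\n\nQuestion: " + question, as above

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0301",
    messages=[{"role": "system", "content": ""}, {"role": "user", "content": prompt}],
    max_tokens=400,
    temperature=0,
    top_p=1,
    n=1,
    stream=False,
    stop="\n\n",  # note: removing this was reported above to improve chat-mode results
)
print(response.choices[0].message.content)
```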
I installed version 0.27.4 to run the code in examples/CoT.ipynb, but some error was raised when running the following line.

I updated openai to version 0.28.1 and the error still exists; updating to a newer version doesn't work either. So I changed the code according to the error. It seems gpt-3.5-turbo-0301 can only be used with ChatCompletion. The final output is 0.439, far from 0.78+. Is there any suggestion for me? gpt-3.5-turbo-0301 seems not good.