Closed HelloWorldLTY closed 9 months ago
Thanks for your comment. Do you have access to Codex?
Currently, all InvalidRequestErrors will be labeled as "lengthError", but it is possible that the error was raised because of a wrong API key or no access to the model: https://github.com/ncbi/GeneGPT/blob/1587ab23397a384062000abd62e7a08a791e6cce/main.py#L171
I would suggest printing out the exact errors here for debugging.
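For example, a small classifier could separate length errors from key or access problems before labeling them. This is just a sketch: the exception class is a stand-in for openai.error.InvalidRequestError, and the message substrings are assumptions based on common OpenAI error texts.

```python
# Sketch: distinguish error causes instead of collapsing every
# InvalidRequestError into "lengthError".
# `InvalidRequestError` is a stand-in for openai.error.InvalidRequestError.
class InvalidRequestError(Exception):
    pass


def classify_error(exc):
    """Map an API error message to a debugging label (hypothetical helper)."""
    msg = str(exc)
    if "maximum context length" in msg:
        return "lengthError"          # the only case the current code assumes
    if "Incorrect API key" in msg:
        return "authError"            # wrong or expired key
    if "does not exist" in msg or "not supported" in msg:
        return "modelAccessError"     # no access to the model / wrong endpoint
    return f"unknownError: {msg}"
```

Printing the label together with the raw message keeps the original behavior while making the true cause visible.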
You can also try other models such as GPT-3.5-Turbo.
I am a little confused. I changed the api_key, and I thought that with a valid api_key I could access all of the OpenAI models.
I will try 3.5. Thanks.
Some models are not generally accessible (e.g., code-davinci-002 since it is to be deprecated).
If possible, you could remove the try/except block and put the error message here, then I can better help.
Overall I would suggest switching to GPT-3.5-Turbo and GPT-4. The way of doing tool augmentation has changed dramatically in the past 8 months, and the code here is just for the reproducibility of the original Codex results. Now I guess people are doing tool augmentations through the function calling mechanism.
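A minimal sketch of that newer style, assuming the pre-1.0 openai Python client; the "call_ncbi_api" tool name and its schema below are illustrative assumptions, not part of GeneGPT:

```python
# Sketch: tool augmentation via the function calling mechanism.
# The function schema is a hypothetical example of exposing an
# NCBI Web API call as a tool the model can choose to invoke.
def build_chat_request(question):
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": question}],
        "functions": [
            {
                "name": "call_ncbi_api",  # hypothetical tool name
                "description": "Query an NCBI E-utils URL and return the response",
                "parameters": {
                    "type": "object",
                    "properties": {"url": {"type": "string"}},
                    "required": ["url"],
                },
            }
        ],
    }
```

The request would then be sent with openai.ChatCompletion.create(**build_chat_request(...)), and the model may reply with a function_call to execute instead of plain text.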
I see, thanks, and I got the detailed error:
Traceback (most recent call last):
  File "main.py", line 172, in <module>
    response = openai.Completion.create(**body)
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/scgpt/lib/python3.8/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/scgpt/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/scgpt/lib/python3.8/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/scgpt/lib/python3.8/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/scgpt/lib/python3.8/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
It seems like a bug caused by the Completion endpoint. But if I use ChatCompletion, I think I need to change the prompt into messages.
I intend to benchmark this model. Thanks.
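A minimal sketch of that conversion, assuming the pre-1.0 openai client (`to_chat_body` is a hypothetical helper, not GeneGPT code):

```python
# Sketch: convert a Completion-style request body into a ChatCompletion-style
# one by moving `prompt` into a single user message. All other fields
# (model, temperature, stop, max_tokens, ...) carry over unchanged.
def to_chat_body(body):
    chat_body = {k: v for k, v in body.items() if k != "prompt"}
    chat_body["messages"] = [{"role": "user", "content": body["prompt"]}]
    return chat_body
```

The call then becomes openai.ChatCompletion.create(**to_chat_body(body)), and the output is read from response["choices"][0]["message"]["content"] rather than response["choices"][0]["text"].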
What is the version of your openai package?
0.28.1. Should I change it to 0.27.7? I do not think there is a large difference.
I just tested 0.27.7 and received the same error with GPT-3.5:
Traceback (most recent call last):
  File "/gpfs/gibbs/pi/zhao/tl688/GeneGPT/main.py", line 172, in <module>
    response = openai.Completion.create(**body)
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/cpsc488/lib/python3.9/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/cpsc488/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/cpsc488/lib/python3.9/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/cpsc488/lib/python3.9/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/gpfs/gibbs/project/zhao/tl688/conda_envs/cpsc488/lib/python3.9/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
@HelloWorldLTY For GPT-3.5, please run:
python main_turbo.py 001001
Let me know if it works.
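Presumably main_turbo.py talks to the chat endpoint instead of v1/completions. A small heuristic (an assumption for illustration, not code from the repo) for telling chat models apart from completion models with the pre-1.0 client:

```python
# Heuristic sketch: chat-only models must go through
# openai.ChatCompletion.create, not openai.Completion.create.
# The prefix list is an assumption covering the models discussed here.
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")


def is_chat_model(model):
    return model.startswith(CHAT_MODEL_PREFIXES)
```

Routing on this check would avoid the "This is a chat model and not supported in the v1/completions endpoint" error above.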
Thanks, it works!
Hi, after running your example code, I received a length error for each output result:
Are there any solutions to this problem? Thanks.