microsoft / Codex-CLI

CLI tool that uses Codex to turn natural language commands into their Bash/ZShell/PowerShell equivalents
MIT License

Error "Codex CLI error: Invalid request - The model: `code-davinci-002` does not exist" #129

Open · ChadLevy opened this issue 1 year ago

ChadLevy commented 1 year ago

Edit 3: It looks like OpenAI shut down their Codex API (https://news.ycombinator.com/item?id=35242069). Apparently there was an e-mail, but I never received one.

Also apparently the API is available through Azure (https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/work-with-code). Perhaps an alternative would be to use Azure?

Original:


I'm getting this error on each attempt to use the Codex CLI:

Codex CLI error: Invalid request - The model: `code-davinci-002` does not exist

I haven't dug into the code at all, but I noticed all of the OpenAI beta URLs have been removed. Most of the URLs in the install instructions redirect, but the engines listing URL returns a 404 (https://beta.openai.com/docs/engines/codex-series-private-beta). The new URL is https://platform.openai.com/docs/models/codex, and it still shows the model as being in private beta.

Edit:

When I query the list of OpenAI engines available to my account, code-davinci-002 is not listed. So I reran the install script and selected a different engine available to me (gpt-3.5-turbo). After restarting PowerShell I'm still seeing the same error. I confirmed that the openaiapirc file was correctly updated with the new engine and verified that all instances of PowerShell had been shut down, but I am still seeing the same error.
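
(For reference, the openaiapirc file that the setup script writes is a small INI file; per the project README it looks roughly like this, with placeholder values in place of real IDs and keys:)

```
[openai]
organization_id=org-xxxxxxxx
secret_key=sk-xxxxxxxx
engine=gpt-3.5-turbo
```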

Edit 2: I also updated the current_context.config file with the updated engine. Now the error I receive is

Codex CLI error: Invalid request - This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
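
(For context on that last error: the legacy v1/completions endpoint and the newer v1/chat/completions endpoint take different request shapes, and chat models such as gpt-3.5-turbo are only served by the latter. A minimal sketch with curl, assuming OPENAI_API_KEY is set in the environment; this is not the CLI's own code, just an illustration of the two endpoints:)

```bash
# Legacy completions endpoint: accepts completion models (e.g. text-davinci-003),
# but rejects chat models such as gpt-3.5-turbo with the error quoted above.
curl https://api.openai.com/v1/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-davinci-003", "prompt": "# list files modified today\n", "max_tokens": 64}'

# Chat completions endpoint: required for gpt-3.5-turbo and gpt-4,
# and it takes a list of messages instead of a single prompt string.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Write a bash command that lists files modified today"}]}'
```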
kerbymart commented 1 year ago

It seems code-davinci-002 and code-cushman-001 have been removed, but yes, at the same time the Codex CLI does not seem to support gpt-3.5-turbo.

hablutzel1 commented 1 year ago

This fork is intended to fix this: https://github.com/Lukas-LLS/Codex-CLI. A PR from @Lukas-LLS would be great.

cyrrill commented 1 year ago

Note:

> As of March 2023, the Codex models are now deprecated. Please check out our newer Chat models which are able to do many coding tasks with similar capability

as per:


https://platform.openai.com/docs/guides/code

Use gpt-3.5-turbo for best results! Once you get API access to GPT-4, move up to that; there is no point in using Codex models like davinci anymore.

loftusa commented 1 year ago

I'm still getting "cannot find OpenAI model" errors, even with gpt-3.5-turbo as my model. EDIT: That was while using the fork mentioned above: https://github.com/Lukas-LLS/Codex-CLI

Using the main branch, I get `Codex CLI error: Invalid request - This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?` when I use gpt-3.5-turbo as my model, and gpt-4 doesn't work at all.

I use this tool every day, so a bugfix would be great!

Lukas-LLS commented 1 year ago

I can think of two possible causes for your problem:

  1. You might have an old version of the fork, because at the time my fork was posted in this issue I was still working on it, and at that point it was not operational. (If you update to a newer version, make sure to run the cleanup script and then the setup again, because there are some changes that will break an older setup.)
  2. You might still have the setup from https://github.com/microsoft/Codex-CLI. If that is the case, you should run the cleanup script from the original project before running the setup from the fork. (I don't know how you migrated to the fork, so this is just a possibility.)

And for gpt-4, make sure the model is available to you. For that to be the case, you must either have signed up on the waitlist at https://openai.com/waitlist/gpt-4-api (from the waitlist you gain access to the gpt-4 model) or have a ChatGPT Plus subscription via https://chat.openai.com/chat, using the Upgrade to Plus button in the lower left corner (from ChatGPT Plus you gain access to the gpt-4-32k model).

If your issue still persists after these steps, let me know and I will look further into it

loftusa commented 1 year ago

@Lukas-LLS still having problems. Both gpt-4-32k and gpt-3.5-turbo return `Cannot find OpenAI model` errors. I have ChatGPT Plus.


The only difference I can think of between what I did and the installation instructions is that I copied my OpenAI secret key from where I originally stored it, since I think the screenshot showing how to copy it directly from the OpenAI website is outdated (I cannot do that).

hablutzel1 commented 1 year ago

@Lukas-LLS , could you consider submitting a PR to the official project?

Lukas-LLS commented 1 year ago

I have already submitted a PR: #131

Lukas-LLS commented 1 year ago

@loftusa I have found and fixed your problem. It was limited to the zsh_setup.sh script, which is why I did not find it immediately. The problem was that in some cases the variable name modelId was written as modelID, due to me carelessly refactoring the name without checking it afterwards. It should be fixed with the latest commit.

I also found out that zsh_setup.sh only works on macOS and not for zsh on a regular Linux system, due to the different implementations of the sed command on the two operating systems.
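
(For anyone hitting the sed difference mentioned above: BSD sed on macOS and GNU sed on most Linux distributions disagree about the in-place editing flag. A rough sketch of the difference, with a placeholder pattern and file rather than the actual line from zsh_setup.sh:)

```bash
# GNU sed (typical Linux): -i on its own edits the file in place.
sed -i 's/old/new/g' some_config_file

# BSD sed (macOS): -i requires an explicit (possibly empty) backup suffix,
# so the same command needs an extra '' argument.
sed -i '' 's/old/new/g' some_config_file
```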

AntonOsika commented 1 year ago

Support for Chat models (GPT 3.5/4) now works on my fork!

Feel free to use it here:

https://github.com/AntonOsika/CLI-Co-Pilot

The required changes in the code were small but non-obvious.

hablutzel1 commented 1 year ago

@AntonOsika , for Bash, your fork is inserting a space at the beginning of the generated commands, preventing them from being stored in the history. Can I change this behavior through configuration?
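
(Background on why the space matters: in bash, a command that starts with a space is dropped from the history only when HISTCONTROL contains ignorespace or ignoreboth, which many default .bashrc files set. A quick way to check and work around it in the current session, as a sketch:)

```bash
# If this prints "ignorespace" or "ignoreboth", space-prefixed commands are not saved.
echo "$HISTCONTROL"

# Session-local workaround: keep ignoring duplicates but stop ignoring leading spaces.
HISTCONTROL=ignoredups
```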

AntonOsika commented 1 year ago

Thanks for pointing this out!

Would love it if you submitted an MR to make it depend on which shell is being used.


Fatfish588 commented 1 year ago

> @loftusa I have found and fixed your problem. It was limited to the zsh_setup.sh script, which is why I did not find it immediately. [...]

I found that after pressing Ctrl+G, the generated command is only displayed immediately after the prompt text (as in the issue I raised), like this:

what is running on port 3306sudo lsof -i :3306


Lukas-LLS commented 1 year ago

That was the original behavior of Codex-CLI. I found that writing the prompt as `# Your comment here` && had worked around this behavior, although I did not like that way of writing a prompt. I have now changed the way the command is inserted in bash and zsh to match PowerShell: you write a normal comment, hit Ctrl+G, and the command appears on a new line below the comment. For this to take effect you must update your local repository.
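
(Not the fork's actual implementation, just a sketch of how a bash Ctrl+G binding can append the generated command on a new line below the comment, using a hypothetical get_completion helper in place of the real API call:)

```bash
# Hypothetical helper that turns the current buffer into a shell command
# (in the real tool this would call the Python completion script).
get_completion() { printf 'sudo lsof -i :3306'; }

insert_suggestion() {
  local suggestion
  suggestion=$(get_completion)
  # Keep the comment the user typed and append the suggestion on its own line;
  # when the buffer is accepted, the comment line is ignored and only the command runs.
  READLINE_LINE="${READLINE_LINE}"$'\n'"${suggestion}"
  READLINE_POINT=${#READLINE_LINE}
}

# Bind Ctrl+G to the function above (requires bash 4+ for bind -x with READLINE_LINE).
bind -x '"\C-g": insert_suggestion'
```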

Fatfish588 commented 1 year ago

> I have now changed the way the command is inserted in bash and zsh to match PowerShell [...] For this to take effect you must update your local repository.

Now it runs great. Thank you. 🙏

pripishchik commented 1 year ago

> Support for Chat models (GPT 3.5/4) now works on my fork! Feel free to use it here: https://github.com/AntonOsika/CLI-Co-Pilot [...]

I'm getting this error in zsh:

Codex CLI error: Unexpected exception - module 'openai' has no attribute 'ChatCompletion'

do you know how to fix it?

hablutzel1 commented 1 year ago

> do you know how to fix it?

Try updating the openai package with pip.
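
(openai.ChatCompletion only exists in newer releases of the Python openai package, roughly 0.27 and later, so an older install raises exactly that AttributeError. A quick check-and-upgrade sketch:)

```bash
# Show the installed version; releases older than ~0.27 predate ChatCompletion.
pip show openai | grep -i '^version'

# Upgrade to a release that includes openai.ChatCompletion.
pip install --upgrade openai
```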

pripishchik commented 1 year ago

@hablutzel1 looks like it works, thanks!!!