microsoft / gpt-review


[Bug Report]: gpt-35-turbo is not a valid OpenAI API model #127

Closed. klutchell closed this issue 1 year ago.

klutchell commented 1 year ago

Module path

gpt ask "What is the capital of France?" --debug --fast

review-gpt CLI version

0.9.4

Describe the bug

When using the OpenAI API directly (not via Azure), the model name passed to the API is not valid:

{"model": "gpt-35-turbo", "messages": [{"role": "user", "content": "What is the capital of France?"}], "max_tokens": 100, "temperature": 0.7, "top_p": 0.5, "frequency_penalty": 0.5, "presence_penalty": 0}

The model name should be gpt-3.5-turbo, per the OpenAI API docs.
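For comparison, here is a minimal sketch of the same request made with the correct model name, using the pre-1.0 openai Python SDK (the same openai.ChatCompletion.create call that appears in the traceback below). The parameters mirror the payload from the debug log; the API key is a placeholder.

import openai

openai.api_key = "sk-..."  # placeholder; use a real OpenAI API key

# "gpt-3.5-turbo" is the OpenAI model id; "gpt-35-turbo" is the
# Azure-style deployment name and is rejected by api.openai.com.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=100,
    temperature=0.7,
    top_p=0.5,
    frequency_penalty=0.5,
    presence_penalty=0,
)
print(completion.choices[0].message.content)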

To reproduce

export OPENAI_API_KEY=****
unset AZURE_OPENAI_API
unset AZURE_OPENAI_API_KEY
gpt ask "What is the capital of France?" --debug --fast

Code snippet

No response

Relevant log output

cli.knack.cli: Command arguments: ['ask', 'What is the capital of France?', '--debug', '--fast']
cli.knack.cli: __init__ debug log:
Enable color in terminal.
cli.knack.cli: Event: Cli.PreExecute []
cli.knack.cli: Event: CommandParser.OnGlobalArgumentsCreate [<function CLILogging.on_global_arguments at 0x102ee2170>, <function OutputProducer.on_global_arguments at 0x103068820>, <function CLIQuery.on_global_arguments at 0x103085b40>]
cli.knack.cli: Event: CommandInvoker.OnPreCommandTableCreate []
cli.knack.cli: Event: CommandLoader.OnLoadArguments []
cli.knack.cli: Event: CommandInvoker.OnPostCommandTableCreate []
cli.knack.cli: Event: CommandInvoker.OnCommandTableLoaded []
cli.knack.cli: Event: CommandInvoker.OnPreParseArgs []
cli.knack.cli: Event: CommandInvoker.OnPostParseArgs [<function OutputProducer.handle_output_argument at 0x1030688b0>, <function CLIQuery.handle_query_parameter at 0x103085bd0>]
This command is in preview. It may be changed/removed in a future release.
root: Prompt sent to GPT: What is the capital of France?

root: Model Selected based on prompt size: gpt-35-turbo
root: Using Open AI.
openai: message='Request to OpenAI API' method=post path=https://api.openai.com/v1/chat/completions
openai: api_version=None data='{"model": "gpt-35-turbo", "messages": [{"role": "user", "content": "What is the capital of France?"}], "max_tokens": 100, "temperature": 0.7, "top_p": 0.5, "frequency_penalty": 0.5, "presence_penalty": 0}' message='Post details'
urllib3.util.retry: Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
urllib3.connectionpool: Starting new HTTPS connection (1): api.openai.com:443
urllib3.connectionpool: https://api.openai.com:443 "POST /v1/chat/completions HTTP/1.1" 404 None
openai: message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=None request_id=8dcbe0a951e6de5e5e037c01cab50d3e response_code=404
openai: error_code=None error_message='The model `gpt-35-turbo` does not exist' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
cli.knack.cli: The model `gpt-35-turbo` does not exist
Traceback (most recent call last):
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
    cmd_result = self.invocation.execute(args)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/invocation.py", line 224, in execute
    cmd_result = parsed_args.func(params)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/commands.py", line 146, in __call__
    return self.handler(*args, **kwargs)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/commands.py", line 253, in _command_handler
    result = op(client, **command_args) if client else op(**command_args)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/gpt_review/_ask.py", line 121, in _ask
    response = _call_gpt(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/gpt_review/_openai.py", line 102, in _call_gpt
    completion = openai.ChatCompletion.create(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model `gpt-35-turbo` does not exist
cli.knack.cli: Event: Cli.PostExecute []
klutchell commented 1 year ago

Related comment: https://github.com/hwchase17/langchain/issues/1591#issuecomment-1497865402

klutchell commented 1 year ago

The workaround seems to be using azure.yaml to override the model names used for OpenAI endpoints.

# azure.yaml
azure_model_map:
    turbo_llm_model_deployment_id: gpt-3.5-turbo
    smart_llm_model_deployment_id: gpt-4
    large_llm_model_deployment_id: gpt-4-32k
    embedding_model_deployment_id: text-embedding-ada-002
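As a rough illustration of how such a map can translate an Azure-style deployment id into a valid OpenAI model name, here is a small hypothetical sketch. The load_model_map and resolve_model helpers are made up for this example and do not reflect gpt-review's actual internals; they only show the lookup the azure.yaml override is meant to provide.

import yaml

def load_model_map(path: str = "azure.yaml") -> dict:
    # Read the azure_model_map section from the YAML config shown above.
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f).get("azure_model_map", {})

def resolve_model(deployment_id: str, model_map: dict) -> str:
    # Fall back to the deployment id itself if no override is configured.
    return model_map.get(deployment_id, deployment_id)

model_map = load_model_map()
print(resolve_model("turbo_llm_model_deployment_id", model_map))  # -> gpt-3.5-turbo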