signebedi / gptty

ChatGPT wrapper in your TTY
MIT License

[bug] openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions? #30

Closed signebedi closed 1 year ago

signebedi commented 1 year ago

Using the gpt-3.5-turbo model:

[question] tell me three interesting world capital cities 
.....    Traceback (most recent call last):
  File "/home/sig/Code/gptty/venv/bin/gptty", line 11, in <module>
    load_entry_point('gptty', 'console_scripts', 'gptty')()
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/home/sig/Code/gptty/gptty/__main__.py", line 77, in chat
    asyncio.run(chat_async_wrapper(config_path))
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/sig/Code/gptty/gptty/__main__.py", line 107, in chat_async_wrapper
    await create_chat_room(configs=configs, config_path=config_path)
  File "/home/sig/Code/gptty/gptty/gptty.py", line 137, in create_chat_room
    response = await response_task
  File "/home/sig/Code/gptty/gptty/gptty.py", line 43, in fetch_response
    return await openai.Completion.acreate(
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_resources/completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 310, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 645, in _interpret_async_response
    self._interpret_response_line(
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

https://stackoverflow.com/questions/75774873/openai-chatgpt-gpt-3-5-api-error-this-is-a-chat-model-and-not-supported-in-t
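The error boils down to routing a chat-only model (gpt-3.5-turbo) to the legacy v1/completions endpoint. A minimal sketch of telling the two families apart before dispatching (the helper name and prefix list below are illustrative assumptions, not gptty code):

```python
# Chat-only model families as of the gpt-3.5-turbo era; assumed list,
# check the models overview linked in the references for the real one.
CHAT_ONLY_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def uses_chat_endpoint(model: str) -> bool:
    # Chat models must go to v1/chat/completions; older completion
    # models (e.g. text-davinci-003) stay on v1/completions.
    return model.startswith(CHAT_ONLY_PREFIXES)

print(uses_chat_endpoint("gpt-3.5-turbo"))     # True
print(uses_chat_endpoint("text-davinci-003"))  # False
```

With a check like this, fetch_response could branch to the correct endpoint instead of raising InvalidRequestError.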

signebedi commented 1 year ago

[model] add support for chat completion models. Currently, we only support completion models; we will need to find a way to support chat completion models too.

References

  1. Chat completion source code (Python): https://github.com/openai/openai-python/blob/main/openai/api_resources/chat_completion.py#L8
  2. Chat completion create: https://platform.openai.com/docs/api-reference/chat/create
  3. Chat completions general introduction: https://platform.openai.com/docs/guides/chat
  4. List of models: https://platform.openai.com/docs/models/overview

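To illustrate what the references above imply for gptty, here is a rough sketch of the request shape v1/chat/completions expects: a list of role-tagged messages instead of a single prompt string. The function name is hypothetical; only the payload keys come from the chat completion create reference:

```python
def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    # v1/chat/completions takes "messages" (role-tagged dicts) where
    # v1/completions took a flat "prompt" string.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# With the pre-1.0 openai package shown in the traceback, this payload
# would be passed along the lines of:
#   await openai.ChatCompletion.acreate(**build_chat_request(question))
req = build_chat_request("tell me three interesting world capital cities")
print(req["messages"][0]["role"])  # user
```

Supporting both families would then mean keeping the existing openai.Completion.acreate path for completion models and adding a ChatCompletion path that wraps the prompt this way.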
signebedi commented 1 year ago

This was closed by #31.