zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Bug]: GPTCache Server and Libraries Failed for openai==1.0.0 #570

Open ephemeral2eternity opened 10 months ago

ephemeral2eternity commented 10 months ago

Current Behavior

The following command fails when starting the GPTCache server.

$ gptcache_server -s 127.0.0.1 -p 8000

Or

$ docker pull zilliz/gptcache:latest
$ docker run -p 8000:8000 -it zilliz/gptcache:latest

The errors are shown below.

successfully installed package: openai
Traceback (most recent call last):
  File "/usr/local/bin/gptcache_server", line 5, in <module>
    from gptcache_server.server import main
  File "/usr/local/lib/python3.8/site-packages/gptcache_server/server.py", line 8, in <module>
    from gptcache.adapter import openai
  File "/usr/local/lib/python3.8/site-packages/gptcache/adapter/openai.py", line 31, in <module>
    class ChatCompletion(openai.ChatCompletion, BaseCacheLLM):
  File "/usr/local/lib/python3.8/site-packages/openai/_utils/_proxy.py", line 22, in __getattr__
    return getattr(self.__get_proxied__(), attr)
  File "/usr/local/lib/python3.8/site-packages/openai/_utils/_proxy.py", line 43, in __get_proxied__
    return self.__load__()
  File "/usr/local/lib/python3.8/site-packages/openai/lib/_old_api.py", line 33, in __load__
    raise APIRemovedInV1(symbol=self._symbol)
openai.lib._old_api.APIRemovedInV1:

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
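As the error message itself suggests, a temporary workaround until GPTCache's adapter supports the v1 client is to pin `openai` below 1.0.0 (e.g. `pip install "openai==0.28"`) before launching `gptcache_server`. A minimal sketch of a pre-flight version check (a hypothetical helper, not part of GPTCache) illustrates the compatibility boundary:

```python
# Sketch (hypothetical helper, not part of GPTCache): check whether an
# openai version string predates 1.0.0, i.e. still exposes the legacy
# openai.ChatCompletion class that gptcache/adapter/openai.py imports.
# openai.ChatCompletion was removed in openai 1.0.0 (APIRemovedInV1).

def openai_is_pre_v1(pkg_version: str) -> bool:
    """Return True if the given openai version string predates 1.0.0."""
    major = pkg_version.split(".")[0]
    return int(major) < 1

if __name__ == "__main__":
    # The version the error message suggests pinning to:
    print(openai_is_pre_v1("0.28"))   # pre-v1, compatible with the adapter
    print(openai_is_pre_v1("1.0.0"))  # v1+, triggers APIRemovedInV1
```

In practice you would compare against `importlib.metadata.version("openai")` at startup; the check above is only a sketch of the version boundary the traceback describes.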

Expected Behavior

> gptcache_server -s 127.0.0.1 -p 8000
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
INFO:     Started server process [8545]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:61051 - "POST /get HTTP/1.1" 200 OK

Steps To Reproduce

1. docker pull zilliz/gptcache:latest
2. docker run -p 8000:8000 -it zilliz/gptcache:latest

Environment

No response

Anything else?

No response

SimFG commented 9 months ago

same issue: #576

yudhiesh commented 8 months ago

Any plans to fix this? The server is completely unusable.