zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License
6.96k stars 490 forks

[Bug]: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases #533

Open hotpeppeper opened 10 months ago

hotpeppeper commented 10 months ago

Current Behavior

from gptcache.adapter.langchain_models import LangChainChat

Traceback (most recent call last):
  File "/home/ld/miniconda3/envs/llm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/ld/miniconda3/envs/llm/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/home/ld/code/cache_t/gptcache_t.py", line 14, in <module>
    from cache import MyLangChainChat
  File "/home/ld/code/cache_t/cache/__init__.py", line 1, in <module>
    from .langchainchat import MyLangChainChat
  File "/home/ld/code/cache_t/cache/langchainchat.py", line 4, in <module>
    from gptcache.adapter.langchain_models import LangChainChat
  File "/home/ld/miniconda3/envs/llm/lib/python3.10/site-packages/gptcache/adapter/langchain_models.py", line 30, in <module>
    class LangChainLLMs(LLM, BaseModel):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
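For context, this TypeError is generic Python behavior, not specific to gptcache: when two base classes carry unrelated metaclasses, Python cannot derive a single metaclass for the subclass. A minimal, library-free reproduction (MetaA/MetaB are illustrative names):

```python
# Minimal reproduction of the same TypeError, with no langchain or gptcache
# involved: two bases whose metaclasses are unrelated cannot be combined.
class MetaA(type):
    pass

class MetaB(type):
    pass

class A(metaclass=MetaA):
    pass

class B(metaclass=MetaB):
    pass

try:
    class C(A, B):  # Python cannot pick one metaclass that subclasses both
        pass
except TypeError as err:
    print(err)  # metaclass conflict: the metaclass of a derived class ...
```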

Expected Behavior

No response

Steps To Reproduce

No response

Environment

ubuntu: 22.04 LTS
gptcache: 0.1.40

Anything else?

No response

SimFG commented 10 months ago

This error seems to be caused by an incompatibility issue between langchain and gptcache. You can try using an older version of langchain.

BastienKovac commented 9 months ago

Looking into it, it seems the metaclasses of Langchain's LLM and pydantic's BaseModel are mismatched:


So the hierarchy from LangChainLLMs raises an error.

I did some dirty testing, and simply removing BaseModel from the hierarchy seems to fix the issue. However, I don't have the big-picture knowledge to know what the impacts of that change would be.
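A library-free sketch of that suggestion, using stand-in classes (V1Meta, V2Meta, BaseModelV1, BaseModelV2 are illustrative names playing the roles of the two incompatible pydantic metaclasses, not real pydantic classes):

```python
# V1Meta / V2Meta stand in for the metaclasses of pydantic v1 and v2.
class V1Meta(type):
    pass

class V2Meta(type):
    pass

class BaseModelV1(metaclass=V1Meta):   # what langchain's LLM is built on
    pass

class BaseModelV2(metaclass=V2Meta):   # the BaseModel gptcache imported
    pass

class LLM(BaseModelV1):                # stand-in for langchain's LLM
    pass

# class LangChainLLMs(LLM, BaseModelV2):  # raises the metaclass conflict
#     ...

class LangChainLLMs(LLM):              # dropping BaseModel resolves it
    pass

print(BaseModelV1 in LangChainLLMs.__mro__)  # True: a BaseModel is still inherited
```

Since LLM already inherits from a pydantic BaseModel, the extra base adds nothing to the MRO and only introduces the second metaclass.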

Any chance of a fix?

SimFG commented 9 months ago

This seems to need to be fixed in the langchain repo.

BastienKovac commented 9 months ago

This issue still seems to be present with Langchain v0.0.312 and GPTCache v0.1.42:

>>> from gptcache.adapter.langchain_models import LangChainLLMs
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/kovac/.pyenv/versions/ENV/lib/python3.11/site-packages/gptcache/adapter/langchain_models.py", line 30, in <module>
    class LangChainLLMs(LLM, BaseModel):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

SimFG commented 9 months ago

Thank you very much for your feedback; I will fix it when I have free time. Of course, if you know more about this kind of problem, I look forward to a fix PR from you.

yacchi commented 9 months ago

This problem occurs when Pydantic v2 is installed.

It is explained on the Langchain side and can be solved by not mixing Pydantic v1 and v2 in the same hierarchy (https://python.langchain.com/docs/guides/pydantic_compatibility).

The following classes used in gptcache/adapter/langchain_models.py both already inherit from pydantic.BaseModel.

Currently, the Langchain side explicitly uses Pydantic v1, so when the GPTCache side is in a situation to inherit v2, an error occurs.
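As a sketch of the guidance in that compatibility guide (assuming pydantic v2 is the installed package; CacheConfig is a hypothetical example class): pydantic v2 keeps the old API importable under the pydantic.v1 namespace, so code that must interoperate with langchain's internal v1 models can import BaseModel from there:

```python
# pydantic v2 ships the old API under the pydantic.v1 namespace, so code that
# has to match langchain's internal v1 models can import BaseModel from there.
try:
    from pydantic.v1 import BaseModel  # pydantic v2 installed
except ImportError:
    from pydantic import BaseModel     # plain pydantic v1 installed

class CacheConfig(BaseModel):  # hypothetical example class
    similarity_threshold: float = 0.8

print(CacheConfig().similarity_threshold)  # 0.8
```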

As already suggested, it would be best to remove the BaseModel inheritance from LangChainLLMs and LangChainChat:

- langchain llms base
- langchain chat_models base

I am currently encountering this issue myself, and since I cannot lower the version of Pydantic or Langchain, I have added code to my repository to work around it. May I submit this as a pull request?

SimFG commented 9 months ago

yes, you can do it

anggara-kaskus commented 9 months ago

Thank you very much @yacchi. I've tried your forked repository and it works well!