zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Bug]: Use GPTCache with HuggingFacePipeline #653

Open ste3v0 opened 2 months ago

ste3v0 commented 2 months ago

Current Behavior

When I set up GPTCache according to the documentation, it does not work.

I am using the GPTCache adapter for LangChain and the LangChain adapter for my embedding.

In the end I call `set_llm_cache(GPTCache(init_gptcache))`, roughly like this:
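Here is a minimal sketch of the setup, following the GPTCache example in the LangChain docs. The embedding model and the Llama 3.1 model id are placeholders, not my exact configuration, and the import paths may differ slightly depending on the LangChain version:

```python
import hashlib

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from gptcache.embedding import LangChain as LangChainEmbedding
from langchain.cache import GPTCache  # langchain_community.cache on newer versions
from langchain.globals import set_llm_cache
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFacePipeline


def init_gptcache(cache_obj: Cache, llm: str):
    # One cache directory per LLM, keyed by a hash of its name.
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    # Wrap a LangChain embedding with GPTCache's LangChain embedding adapter
    # (placeholder embedding model).
    embedding = LangChainEmbedding(embeddings=HuggingFaceEmbeddings())
    init_similar_cache(
        cache_obj=cache_obj,
        data_dir=f"similar_cache_{hashed_llm}",
        embedding=embedding,
    )


set_llm_cache(GPTCache(init_gptcache))

# Local model served through a HuggingFace pipeline (placeholder model id).
llm = HuggingFacePipeline.from_model_id(
    model_id="meta-llama/Llama-3.1-8B-Instruct",
    task="text-generation",
)
print(llm.invoke("What is a semantic cache?"))
```

With this in place, the first `invoke` call should populate the cache and subsequent, semantically similar prompts should be answered from it; instead, the warning below is logged on every call.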

The error I am receiving is:

```
adapter.py-adapter:278 - WARNING: failed to save the data to cache, error: get_models..EmbeddingType.validate() takes 2 positional arguments but 3 were given
```

Can you please just tell me whether this functionality is not yet implemented for HuggingFacePipeline with a local Llama 3.1?

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

SimFG commented 1 month ago

Can you show your demo code? Then maybe I can check it.