Current Behavior
When I implement GPTCache according to the documentation, it does not work. I am using the GPTCache adapter of LangChain and the LangChain adapter for my embedding.
In the end I call set_llm_cache(GPTCache(init_gptcache)) and the error I am receiving is:
adapter.py-adapter:278 - WARNING: failed to save the data to cache, error: get_models..EmbeddingType.validate() takes 2 positional arguments but 3 were given
If this functionality is simply not implemented yet for HuggingFacePipeline using a local Llama 3.1, please just tell me.
Expected Behavior
No response
Steps To Reproduce
No response
Environment
No response
Anything else?
No response