zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[Bug]: #524

Closed · nelsongallardo closed this issue 9 months ago

nelsongallardo commented 11 months ago

Current Behavior

I'm trying to get GPTCache working as an LLM cache with LangChain. Following this guide, when I run

from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt, verbose=False)

I get

TypeError: cannot pickle 'onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession' object

Expected Behavior

I should get the answer from the LLM

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

SimFG commented 9 months ago

The problem has been fixed in 0.1.42; you can try it. Looking forward to your feedback.
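
For anyone verifying the fix after upgrading (`pip install -U gptcache`), here is a minimal sketch of the GPTCache/LangChain wiring based on the documented integration, not on code from this thread; the `data_dir` naming is illustrative. `init_similar_cache` uses GPTCache's default ONNX embedding model, which is the component behind the `InferenceSession` pickle error above.

```python
import hashlib

import langchain
from langchain.cache import GPTCache

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache


def init_gptcache(cache_obj: Cache, llm: str) -> None:
    # Keep a separate cache directory per LLM so entries are not shared.
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")


# Register GPTCache as LangChain's global LLM cache before building the chain.
langchain.llm_cache = GPTCache(init_gptcache)
```

With this in place, `load_qa_chain` can be built as in the original snippet, and repeated semantically similar questions should be served from the cache instead of hitting the LLM.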