Closed · nelsongallardo closed this 9 months ago
Current Behavior

Trying to get this working as a cache using LangChain. Following this guide, when I do

```python
chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt, verbose=False)
```

I get

```
TypeError: cannot pickle 'onnxruntime.capi.onnxruntime_pybind11_state.InferenceSession' object
```
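
For context, here is a minimal sketch of the kind of GPTCache + LangChain setup that matches this traceback; the guide the reporter followed is not linked, so the names below (`onnx`, `data_manager`, `llm`, `prompt`) are assumptions, not taken from the report.

```python
# Sketch of a GPTCache similarity-cache setup used with LangChain.
# Assumption: the cache was initialized with the Onnx embedding, since the
# traceback names an onnxruntime InferenceSession.
from gptcache import cache
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.question_answering import load_qa_chain

# The Onnx embedding holds an onnxruntime InferenceSession, which is the
# unpicklable object named in the error.
onnx = Onnx()
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
)
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()

llm = LangChainLLMs(llm=OpenAI(temperature=0))
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="Use the context to answer.\n{context}\nQuestion: {question}",
)

# Per the report, constructing the chain is where the TypeError is raised.
chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt, verbose=False)
```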
Expected Behavior
I should get the answer from the LLM
Steps To Reproduce
No response
Environment
No response
Anything else?
No response

The problem has been fixed in 0.1.42; you can try it. Looking forward to your feedback.
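
For anyone hitting this, a minimal sketch of upgrading and then verifying the installed version before retrying; it assumes the package was installed from PyPI as `gptcache`:

```python
# Upgrade first (shell step, assuming the PyPI package name `gptcache`):
#   pip install -U "gptcache>=0.1.42"
from importlib.metadata import version

# Confirm the fixed release is actually in the environment.
print(version("gptcache"))  # expect "0.1.42" or later
```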