zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

How to work with LLamaIndex #596

Open nathangary opened 6 months ago

nathangary commented 6 months ago

```python
index = GPTVectorStoreIndex.from_documents(
    documents,
    service_context=ServiceContext.from_defaults(
        llm_predictor=LLMPredictor(cache=gptcache_obj)
    ),
)
query_engine = index.as_query_engine()
```

This raises an error:

```
LLMPredictor.__init__() got an unexpected keyword argument 'cache'
```

nathangary commented 6 months ago

LlamaIndex has been upgraded; how do I integrate GPTCache with it now?

SimFG commented 6 months ago

Because LlamaIndex has removed this parameter.