MeghaWalia-eco opened 9 months ago
You only need to handle pydantic's validation of attributes on the class, and then you can use GPTCache normally. Alternatively, you can build an OpenAI proxy service and use GPTCache inside that service.
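For the proxy-service route, a minimal sketch of the idea, assuming the pre-1.0 `openai` SDK that GPTCache's bundled adapter wraps; the `cached_completion` helper and the model name are illustrative, not from this thread:

```python
# Sketch: route OpenAI calls through GPTCache's drop-in openai adapter
# (assumes the pre-1.0 openai SDK that the adapter wraps).
from gptcache import cache
from gptcache.adapter import openai  # same call shape as the openai module

cache.init()            # defaults: exact-match cache backed by an in-memory map
cache.set_openai_key()  # picks up OPENAI_API_KEY from the environment

def cached_completion(prompt: str) -> str:
    # Illustrative helper: on a cache hit the answer is served from GPTCache,
    # otherwise the request goes to OpenAI and the response is stored.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```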
I am not getting any pydantic error, but when I try to set or retrieve the cache key I get errors. The `self.cache[cache_key] = result` and `return self.cache[cache_key]` lines are throwing errors and it is not working.
Can I get an example, based on the above code, of how to do that?
What is the error?
File "C:\AILatestClone\EconomistDigitalSolutions\openai-hack\app\service\CachedLLMPredictor.py", line 22, in predict
if cache_key in self.cache:
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'GPTCache' is not iterable
File "C:\AILatestClone\EconomistDigitalSolutions\openai-hack\app\service\CachedLLMPredictor.py", line 26, in predict
self.cache[cache_key] = result
TypeError: 'GPTCache' object does not support item assignment
I think I am not accessing the cache correctly.
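Both tracebacks point at the same mismatch: GPTCache's cache object is not a dict, so `in`, `cache[key]`, and `cache[key] = value` all raise `TypeError`. A minimal sketch of the same predict logic written against GPTCache's adapter API (`gptcache.adapter.api.get`/`put`) instead; the `CachedLLMPredictor` shape and `_call_llm` are hypothetical stand-ins for the poster's code:

```python
from gptcache import Cache
from gptcache.adapter.api import get, put
from gptcache.processor.pre import get_prompt

class CachedLLMPredictor:  # hypothetical wrapper, not llama_index's class
    def __init__(self):
        self.cache = Cache()
        # get_prompt makes the cache key the raw prompt string, which is
        # what the api-level get()/put() calls pass in
        self.cache.init(pre_embedding_func=get_prompt)

    def predict(self, prompt: str) -> str:
        # Cache is not a mapping: `prompt in self.cache` and
        # `self.cache[prompt] = result` raise TypeError, so use get()/put()
        cached = get(prompt, cache_obj=self.cache)
        if cached is not None:
            return cached
        result = self._call_llm(prompt)
        put(prompt, result, cache_obj=self.cache)
        return result

    def _call_llm(self, prompt: str) -> str:
        raise NotImplementedError  # stand-in for the underlying model call
```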
@MeghaWalia-eco Were you able to solve this issue?
@SachinGanesh No
Current Behavior
I am trying to integrate GPTCache with LlamaIndex, but LLMPredictor does not accept a cache argument. To fix this, I created a CachedLLMPredictor class that extends LLMPredictor.
But here the `self.cache[cache_key] = result` and `return self.cache[cache_key]` lines are throwing errors and it is not working.
My actual problem is that I have to add GPTCache to the existing LlamaIndex calls; my existing implementation is below.
```python
def load_index(self, tenant_index: Index, tenant_config: Config, model_name: Optional[str] = None):
    ...
```
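The snippet above is truncated, so what follows is only a hedged sketch of how a cache-aware predictor could be handed to LlamaIndex when loading an index, assuming a llama_index 0.x API with `ServiceContext` and `load_index_from_storage`; `cached_predictor` and `persist_dir` are illustrative names:

```python
from llama_index import ServiceContext, StorageContext, load_index_from_storage

def load_cached_index(cached_predictor, persist_dir: str):
    # Assumed llama_index 0.x wiring: the predictor rides along in the
    # ServiceContext, so every LLM call the index makes goes through the cache.
    service_context = ServiceContext.from_defaults(llm_predictor=cached_predictor)
    storage_context = StorageContext.from_defaults(persist_dir=persist_dir)
    return load_index_from_storage(storage_context, service_context=service_context)
```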
Expected Behavior
Need to implement GPTCache-based caching in the LLM calls.
Steps To Reproduce
Environment
No response
Anything else?
No response