confident-ai / deepeval

The LLM Evaluation Framework
https://docs.confident-ai.com/
Apache License 2.0

Dramatically simpler and more reliable cache #775

Open prescod opened 2 months ago

prescod commented 2 months ago

I believe the cache can be made dramatically simpler and more reliable by using the Python "diskcache" library.

DiskCache handles a great deal out of the box:

- SQLite-backed storage with atomic, transactional writes
- thread-safe and process-safe access
- eviction policies and size limits
- a memoize() decorator for caching function results by their arguments
I think that hundreds of lines of code could be replaced with diskcache.
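For reference, here is a minimal sketch of the kind of diskcache usage being proposed (the directory name `.deepeval_cache` is just a placeholder):

```python
# pip install diskcache
from diskcache import Cache

# The cache lives in a directory backed by SQLite; writes are atomic,
# so an interrupted run cannot leave half-written data behind.
cache = Cache(".deepeval_cache")

cache.set("some-key", {"answer": 42})  # survives Ctrl-C mid-run
print(cache.get("some-key"))

# memoize() caches a function's return value, keyed on its arguments.
@cache.memoize()
def slow_square(x: int) -> int:
    return x * x
```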

A JSON file is a poor format for a cache because it is hard to update transactionally and incrementally: if you Ctrl-C in the middle of a write, you end up with corrupted data.

Furthermore, I think the right unit of caching is the individual LLM response.

Caching at that level would solve the problem where some kinds of tests are cached and others are not.
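To make that concrete, here is one way response-level caching could look. `cached_generate` and `call_llm` are hypothetical names for illustration, not anything in deepeval:

```python
import hashlib
import json

from diskcache import Cache

cache = Cache(".llm_response_cache")

def cached_generate(model_name: str, prompt: str, temperature: float, call_llm):
    """One cache entry per LLM response: the key is derived from
    everything that determines the output, not from the test that ran."""
    key = hashlib.sha256(
        json.dumps([model_name, prompt, temperature]).encode()
    ).hexdigest()
    if key in cache:
        return cache[key]
    response = call_llm(prompt)  # only hit the API on a cache miss
    cache[key] = response        # atomic write via SQLite
    return response
```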

I'm attaching two files that show how I monkey-patched in a much more reliable caching system with only a few lines of code.

caching.zip

It might be cleaner to add caching to DeepEvalBaseLLM itself, but then I'd need to change every place it is called, so this monkey-patching hack worked better for me. I turned off the built-in DeepEval cache because I was frustrated with it.
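The attached files contain the actual code; as a rough, hypothetical reconstruction of the approach, a monkey-patch along these lines would route every call through the cache. It assumes the model class exposes `generate(self, prompt, ...)` and `get_model_name()`, which may vary between deepeval versions:

```python
from diskcache import Cache
from deepeval.models import GPTModel  # swap in whichever model class you use

_cache = Cache(".deepeval_llm_cache")
_original_generate = GPTModel.generate

def _cached_generate(self, prompt, *args, **kwargs):
    # Key on the (model, prompt) pair so identical calls from any test
    # share one cached response.
    key = (self.get_model_name(), prompt)
    if key in _cache:
        return _cache[key]
    result = _original_generate(self, prompt, *args, **kwargs)
    _cache[key] = result
    return result

# Patch once at import time; every call site now goes through the cache.
GPTModel.generate = _cached_generate
```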

Halpph commented 2 weeks ago

Wow! This sounds like a very big improvement, as I'm having trouble with the caching myself! Are you working on a PR already? @prescod