Moved cache logic out of the individual description services (GPT4, LlamaCPP, ...) into a new description service wrapper class, CachedDescriptionService, then wrapped each model's description service in it. This fixes a bug where the cache was only updated by the OpenAI description services.
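Roughly, the wrapper works like this (a minimal sketch, not the add-on's actual code; the method names and the in-memory cache stand-in are assumptions):

```python
class CachedDescriptionService:
    """Wraps any description service and adds caching around it.

    Sketch of the wrapper pattern described above; ``describe`` and the
    dict-based cache are hypothetical stand-ins for the real interface.
    """

    def __init__(self, service):
        self.service = service
        self.cache = {}  # stand-in for the on-disk cache file

    def describe(self, image_hash, image_data):
        # Cache hit: skip the API call entirely.
        if image_hash in self.cache:
            return self.cache[image_hash]
        # Cache miss: call the wrapped service, then write to the cache.
        description = self.service.describe(image_hash, image_data)
        self.cache[image_hash] = description
        return description
```

Because every model's service goes through the same wrapper, the cache is read and written uniformly rather than only by the OpenAI code path.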
Modified the cache to use multiple files (one per description service, plus the original cache file) instead of a single file. The original cache file is now a read-only fallback and is no longer updated.
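The lookup order could be sketched as follows (file layout, class name, and JSON format are assumptions for illustration, not the add-on's real paths):

```python
import json
import os


class MultiFileCache:
    """One writable cache file per description service, plus a
    read-only fallback file that is consulted but never updated."""

    def __init__(self, service_name, cache_dir, fallback_path):
        self.path = os.path.join(cache_dir, f"{service_name}.json")
        self.data = self._load(self.path)          # per-service cache, writable
        self.fallback = self._load(fallback_path)  # original cache, read-only

    @staticmethod
    def _load(path):
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                return json.load(f)
        return {}

    def get(self, key):
        # Check the service's own cache first, then the read-only fallback.
        return self.data.get(key) or self.fallback.get(key)

    def put(self, key, value):
        # Only the per-service file is ever written; the cache file is
        # created on first write if it does not exist.
        self.data[key] = value
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(self.data, f)
```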
Related issues
closes #32
How was this tested?
Manual testing!
Test cases:
- [x] no cache, describe new image
- [x] fallback cache, describe new image
- [x] fallback cache, describe image from fallback cache
- [x] all caches, describe new image
- [x] all caches, describe image from model's cache
- [x] all caches, describe image from fallback cache
For each test case, checked that the following occur as expected and only when expected:
- [x] create cache file when file does not exist
- [x] skip API call after cache hit
- [x] do API call after cache miss
- [x] write to cache file after API call
All tests performed with Claude 3 Haiku and NVDA+Shift+Y.