daniel-luper opened 1 week ago
Hey @daniel-luper,
Great catch, and your solution seems well thought out. I especially like the idea of a caching decorator for the sake of cleanliness. Ideally, the current cache would be preserved as a fallback so that users don't have to delete it upon upgrading to the new version. That said, I don't think it's mandatory based on the number of people using this feature.
Feel free to start working on this. If you'd like help along the way, don't hesitate to reach out!
Thanks! I'll go ahead and get started on it.
@cartertemm, how would you recommend reading and writing logs? I tried using a logger from the standard `logging` module and checked the `%TMP%/nvda.log` file, but my logs didn't get written. I already made sure to set the NVDA logging level to "debug" in the General settings.
You can always use NVDA's logHandler, like so:

```python
import logHandler

nvda_log = logHandler.log
nvda_log.info("Test msg")
```

HTH
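For what it's worth, `logHandler.log` behaves like a standard `logging.Logger`, so the usual level methods (`debug`, `info`, `error`, ...) are available, and `debug` messages only appear once NVDA's log level is set to "debug". A small sketch, using a stdlib logger as a stand-in so it can run outside NVDA (the logger name here is just for illustration):

```python
import logging

# Outside NVDA, a stdlib logger stands in for logHandler.log;
# inside NVDA, `from logHandler import log` exposes the same methods.
log = logging.getLogger("my_addon")
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler())

log.debug("cache miss, querying model")   # visible only at "debug" level
log.info("description written to cache")
```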
Hi! Love this project, btw. I want to contribute, and while exploring the add-on I found this bug (maybe a feature? I guess some people might want a unified cache across all models, but it's not what I would expect).
Description

Expected behavior: a `DescriptionService` X processes an image that hasn't been cached by X; the image/description pair then gets added to X's cache. When the image has already been cached by X (not Y; otherwise you wouldn't have a way of generating a different description using a different model), the cached description gets returned.

Root Cause

The cache is shared between the `Anthropic` and `LlamaCPP` description services:

Possible Solution

To improve readability and cohesiveness while eliminating duplicate code, move the caching code out of the description services and into a new class (e.g. `CachedDescriptionService`). The new class is a `DescriptionService` that can wrap any other `DescriptionService` Y, adding caching functionality to Y (see the Decorator pattern). Then, use separate cache files for separate models instead of one unified cache across all models. This would have the added benefit of helping prevent the memory error in #26.
I can implement this and make a PR. What do you think? Do you have any questions?