Open janosg opened 3 months ago
Now that we only use joblib for the parallelization of data valuation algorithms, we could also leverage its caching mechanism through the `Memory` class, and perhaps offer only one extension to support caching in a distributed setting.
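For reference, a minimal sketch of what using joblib's `Memory` could look like. The function `utility` and the subset representation are illustrative assumptions, not pyDVL's actual API:

```python
import tempfile
from joblib import Memory

# Disk-backed cache; in practice the location would be configurable.
memory = Memory(location=tempfile.mkdtemp(), verbose=0)


@memory.cache
def utility(subset: tuple) -> float:
    # Stand-in for an expensive model fit + score on the given subset.
    return sum(subset) / (len(subset) or 1)


print(utility((1, 2, 3)))  # computed and stored on disk
print(utility((1, 2, 3)))  # served from the cache
```

Note that `Memory` hashes the arguments and pickles results to disk, which is also why it does not map cleanly onto a memcached-style backend.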
I tried using it when I refactored the caching backends and couldn't really make it work with memcached, because joblib's `Memory` is implemented as file-based caching. So I gave up on basing our code on it, but I still took heavy inspiration from its interface, so perhaps we could consider it again.
The new design of data valuation methods avoids repeated computation of the utility function without relying on caching. We could therefore get rid of our current caching implementation based on memcached, which seems overpowered. This would close several issues related to caching (e.g. #517, #475, #464 and #459). Moreover, it could solve problems that arise from the many files the current caching solution creates.
The only situation where caching is still really important is benchmarking multiple algorithms: there, caching keeps randomness as constant as possible between the algorithms and saves runtime in the benchmark. We should therefore create an entry point for benchmarking frameworks to enable caching. I see two possible solutions:
A `cache_backend` abstraction in the `Utility`, but with only a much simpler shared-memory backend implemented in pydvl. Users with advanced caching needs could then build their own backends.
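A minimal sketch of what such a simple backend could look like. The class name `InMemoryCacheBackend` and its `get`/`set` methods are assumptions, not pyDVL's actual interface; a true cross-process version could back the dict with `multiprocessing.Manager`:

```python
import threading
from typing import Any, Hashable, Optional


class InMemoryCacheBackend:
    """Hypothetical minimal backend: a thread-safe in-memory store
    for utility values, keyed e.g. by (algorithm, subset)."""

    def __init__(self) -> None:
        self._store: dict = {}
        self._lock = threading.Lock()

    def get(self, key: Hashable, default: Any = None) -> Any:
        with self._lock:
            return self._store.get(key, default)

    def set(self, key: Hashable, value: Any) -> None:
        with self._lock:
            self._store[key] = value


backend = InMemoryCacheBackend()
backend.set(("shapley", frozenset({1, 2})), 0.75)
print(backend.get(("shapley", frozenset({1, 2}))))  # 0.75
```

Keeping the backend this small would let benchmarking frameworks plug in their own implementation (memcached, Redis, files) behind the same two-method interface.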