Closed · LckyLke closed this 2 weeks ago
Thank you. How can we disable it if we want? Maybe as shown below?
```python
if enable_caching:
    self.predict = self._create_cached_predict()
```
Yes, I was thinking of conditionally applying the wrapper like so:
```python
if self._max_cache_size:
    predict = lru_cache(maxsize=self._max_cache_size)(predict)
self.predict = predict
```
And remove it from the top of the method ofc
You forgot the else condition, didn't you?
No, if caching is enabled, `predict` gets overwritten; otherwise it is just the plain function. But I wrote this on my phone right now, so I haven't tested it xd
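A quick standalone check of that point (the factory function and the dummy `predict` body below are placeholders, not the project's code): when the cache size is falsy, the name still refers to the unwrapped function, so no else branch is needed.

```python
from functools import lru_cache

def make_predict(max_cache_size=None):
    def predict(h=None):
        # placeholder for the real prediction logic
        return (h or "").upper()

    # Conditionally wrap; no else branch is needed because `predict`
    # already refers to the plain function otherwise.
    if max_cache_size:
        predict = lru_cache(maxsize=max_cache_size)(predict)
    return predict

cached = make_predict(max_cache_size=16)
plain = make_predict()  # caching disabled

print(hasattr(cached, "cache_info"))  # True: lru_cache adds cache_info()
print(hasattr(plain, "cache_info"))   # False: plain function, no cache
```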
```python
from functools import lru_cache
from typing import List, Tuple

def __init__(self, max_cache_size=None):
    self._max_cache_size = max_cache_size
    self.predict = self._create_predict_method()

def _create_predict_method(self):
    self_ref = self  # the closure can reach the instance through this

    def predict(h: str = None, r: str = None, t: str = None) -> List[Tuple[str, float]]:
        pass

    # Apply caching if max_cache_size is not zero or None
    if self._max_cache_size:
        predict = lru_cache(maxsize=self._max_cache_size)(predict)
    return predict
```
so like this
I don't think that we would need `_create_predict_method`. What do you think about the following code?
```python
def __init__(self, max_cache_size: int = None):
    self._max_cache_size = max_cache_size
    if isinstance(max_cache_size, int) and max_cache_size > 0:
        self.predict = lru_cache(maxsize=max_cache_size)(self.predict)
    else:
        # don't do anything, since self.predict is already defined
        pass
```
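A self-contained sketch of that variant (the class name `Predictor` and the dummy `predict` body are assumptions for illustration), showing a repeated call hitting the cache:

```python
from functools import lru_cache

class Predictor:
    def __init__(self, max_cache_size: int = None):
        self._max_cache_size = max_cache_size
        if isinstance(max_cache_size, int) and max_cache_size > 0:
            # Shadow the bound method with a cached wrapper on this instance.
            self.predict = lru_cache(maxsize=max_cache_size)(self.predict)

    def predict(self, h: str = None, r: str = None, t: str = None):
        # dummy stand-in for the real scoring code
        return [(h or "", 1.0)]

p = Predictor(max_cache_size=100)
p.predict(h="alice")
p.predict(h="alice")
print(p.predict.cache_info().hits)  # 1: second identical call is a cache hit
```

One caveat with this pattern: the cache holds references to the bound method (and therefore the instance) and to all cached arguments, so entries keep those objects alive until they are evicted or `cache_clear()` is called.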
Yes, this should also work - should I implement it and create a PR?
Yes please
We can use an LRU caching wrapper from functools. To make the cache size dynamic and based on the current object, we have to create the function using the wrapper at runtime. This could look as follows:
Runtimes get better (e.g. family dataset):

with caching: 5856/5856 [00:10<00:00, 540.20it/s]
without caching: 5856/5856 [03:23<00:00, 28.83it/s]
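A rough way to reproduce that kind of comparison on a toy scale (the `slow_predict` function here is artificial, with `time.sleep` standing in for an expensive model call; the real speedup depends on how often inputs repeat):

```python
import time
from functools import lru_cache

def slow_predict(h):
    time.sleep(0.01)  # stand-in for an expensive model call
    return h.upper()

cached_predict = lru_cache(maxsize=None)(slow_predict)

queries = ["a", "b", "a", "a", "b"] * 20  # 100 calls, only 2 distinct inputs

start = time.perf_counter()
for q in queries:
    cached_predict(q)
cached_time = time.perf_counter() - start

start = time.perf_counter()
for q in queries:
    slow_predict(q)
plain_time = time.perf_counter() - start

print(f"cached: {cached_time:.3f}s, plain: {plain_time:.3f}s")
```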