Closed padraic-shafer closed 1 year ago
Needs test/reproducer.
OK, I'll add these tests during the next few days.
I've added two new tests to `tests/acceptance/test_decorators.py::TestCached` in the `test-namespace-key` branch of this fork.

- `test_cached_without_namespace` passes because the cache key is created with the default format when no namespace is provided.
- `test_cached_with_namespace` fails because the default key for a decorated function/coroutine does not include the namespace.

Run them with: `pytest tests/acceptance/test_decorators.py::TestCached`
The other changes included in https://github.com/aio-libs/aiocache/pull/570 are related to using cache keys that are not strings (notably `tuple` keys), so I will suggest those separately in a new issue.
@Dreamsorcerer if you agree that the ~~tests above are~~ test above is worth resolving, I'll submit a new PR with just those changes to "decorators.py" and their corresponding tests.
Tests look reasonable, go ahead and complete the PR.
> The other changes included in #570 are related to using cache keys that are not strings (notably `tuple` keys), so I will suggest those separately in a new issue.
I was under the impression that cache keys are expected to be strings always..
> Tests look reasonable, go ahead and complete the PR.
Thanks. Will do.
> I was under the impression that cache keys are expected to be strings always..
Yeah, in retrospect I'm seeing that now more than when I started using aiocache some months ago. FWIW, using non-string keys works quite well with the existing package when using a custom `key_builder` and the `key_filter` that I suggested in https://github.com/aio-libs/aiocache/pull/570.

...but I only use the `SimpleMemoryCache` at the moment; not redis or memcached.
Yeah, I suspect it will work in some/many cases, but I don't think it's something we want to actively support. The code seems to assume str keys in many places and I think we'll keep typing support simple if we stick to str.
I've updated the tests to correctly pass the namespace to `cache.exists()`. This is consistent with `cached.get_cache_key()` returning a "raw" key without a namespace as prefix.
I needed to update the tests again to use the namespace from the cache fixtures. Otherwise these always overwrite the arbitrary namespace I originally chose for the test.
After reviewing the motivation for submitting this issue, I think it might be helpful to break this into multiple pieces.
Observation: the `key_builder` argument to `BaseCache` and its subclasses joins a namespace and a "raw" key, whereas the `key_builder` argument to the decorator classes builds a "raw" key from the callable and its arguments when it is called.
1. Using a custom `key_builder` with a cache would be better supported after some simple updates:

   a. Include examples of using a custom `key_builder` when creating a cache.

   b. `BaseCache._build_key` uses the `namespace` argument if provided; otherwise it uses `self.namespace`. A custom `key_builder` breaks this expectation because it cannot access `self.namespace` -- there is no `self` for the externally provided `key_builder`. `BaseCache` public members that accept a namespace parameter (`add`/`set`/`get`/etc.) should pass `self.namespace` to their private equivalents (`_add`/`_set`/`_get`/etc.) if the `namespace` argument is `None`.

   c. Namespace handling should happen in `BaseCache.build_key`, regardless of whether it delegates to `_build_key` or `key_builder`; callers should use `client.build_key` rather than `client._build_key`.
2. Clashes between the cache `key_builder` and the decorator class `key_builder`:

   a. Decorators cannot be used with ad hoc caches that need a custom `key_builder`. The decorator class `key_builder` parameter shadows the parameter with the same name that would otherwise have been passed to the underlying cache via `**kwargs`.

   b. A parameter could be added to `[multi_]cached.__init__` that enables a user to pass a key builder to the underlying cache -- for example, `cache_key_builder`. In general, `**kwargs` could be filtered to pop any keyword arguments that start with "cache_", strip the prefix, and pass them to the underlying cache.

   c. Make `namespace` an explicit parameter to decorator caches, to better indicate that namespace inclusion does not happen in the `key_builder` argument passed to the decorator. Instead, this is performed by the underlying cache, in `self.cache.build_key()`.
3. A custom `key_builder` passed to a decorator class has no access to `self.noself`, and so must hardcode or introspect the decorated callable to determine whether to use the first argument (`self`) in the key that is generated.

   a. `key_builder` should include a `noself` parameter that could be passed explicitly by the user or implicitly by `decorator.get_cache_key()`, a la Item 2b (above).

4. It would be helpful to issue a warning when a decorator class is instantiated with both `alias` and other parameters that do not get used because "alias takes precedence".
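The "cache_"-prefix routing proposed in item 2b could be sketched as below. `split_cache_kwargs` is a hypothetical helper, not part of aiocache; it only illustrates the filtering idea, under the assumption that the decorator forwards the stripped kwargs to the cache constructor.

```python
def split_cache_kwargs(**kwargs):
    """Split keyword arguments into cache-bound and decorator-bound groups.

    Arguments prefixed with "cache_" have the prefix stripped and are
    routed to the underlying cache; everything else stays with the
    decorator. The prefix convention is the proposal above, not aiocache API.
    """
    cache_kwargs = {}
    decorator_kwargs = {}
    for name, value in kwargs.items():
        if name.startswith("cache_"):
            cache_kwargs[name[len("cache_"):]] = value
        else:
            decorator_kwargs[name] = value
    return cache_kwargs, decorator_kwargs


# Hypothetical call site inside a decorator's __init__:
cache_kw, decorator_kw = split_cache_kwargs(
    cache_key_builder=str, cache_namespace="main", ttl=60
)
# cache_kw carries key_builder/namespace to the cache; ttl stays with the decorator.
```

This avoids the name clash in item 2a because the decorator's own `key_builder` parameter and the cache's `key_builder` travel under different names.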
Included in the original issue here was an enhancement to use namespace keys without requiring `ns_key.startswith(namespace)`. I proposed a `key_filter(ns_key, namespace)` member for `BaseCache`, and `_key_filter(ns_key, namespace)` for backends, that returns `True` if `ns_key` is in the namespace `namespace`; otherwise it returns `False`. This would be used by `clear()`/`_clear()` and other cache key inspections.
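A minimal sketch of that proposal, using a toy in-memory store rather than the real `SimpleMemoryCache` (the class, method bodies, and prefix-based default below are illustrative assumptions; `key_filter`/`_key_filter` do not exist in aiocache):

```python
class MemoryCacheSketch:
    """Toy in-memory cache illustrating the proposed key_filter hook."""

    def __init__(self, namespace=""):
        self.namespace = namespace
        self._store = {}

    def build_key(self, key, namespace=None):
        # Explicit namespace argument wins; otherwise fall back to the
        # cache's own namespace, then join by simple concatenation.
        ns = self.namespace if namespace is None else namespace
        return f"{ns}{key}"

    def _key_filter(self, ns_key, namespace):
        # Default behavior: a key belongs to a namespace when it starts
        # with it. A backend or user could override this for key schemes
        # where the namespace is not a literal string prefix.
        return ns_key.startswith(namespace)

    def clear(self, namespace=None):
        # Remove every stored key that _key_filter assigns to `namespace`,
        # returning how many entries were dropped.
        ns = self.namespace if namespace is None else namespace
        doomed = [k for k in self._store if self._key_filter(k, ns)]
        for k in doomed:
            del self._store[k]
        return len(doomed)
```

For example, with entries under `"main:"` and `"other:"`, `clear(namespace="main:")` would drop only the `"main:"` entries; swapping in a different `_key_filter` changes membership without touching `clear()`.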
The issue with `plugins.HitMissRatioPlugin` that was originally referenced here was due to an old version that did not use `**kwargs` in `post_get()`. It was resolved by #507.
All changes to the test suite were included in the `test-namespace-key` branch of this fork before merging them into the PR. This makes it simple to run the tests on the codebase from the main branch.
I think this is mostly resolved now, with improvements to come in PRs for v1.
Feel free to open new issues to track specific problems, but I suspect you'll cover it all in another PR anyway.
There are some parts of aiocache that do not honor `namespace`s. This means that `BaseCache.build_key(key, namespace)` is never called to join the `namespace` with the `key`. The affected modules/classes are:

- `decorators.cached`
- `plugins.HitMissRatioPlugin`
- `backends.memory.SimpleMemoryCache`

This affects sharing an `alias`ed cache across multiple `@cached`-decorated callables. This currently works only when `namespace == None`.

A PR will be submitted shortly.
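The namespace-joining behavior expected of `BaseCache.build_key(key, namespace)` can be approximated by the standalone sketch below. This is not the aiocache implementation; the concatenation scheme and the `default_namespace`/`key_builder` parameters are assumptions used to show the `None`-vs-explicit-namespace distinction discussed above.

```python
def build_key(key, namespace=None, default_namespace="", key_builder=None):
    """Approximate the namespace-joining step a cache is expected to do.

    The explicit `namespace` argument wins; only when it is None does the
    cache's own namespace (`default_namespace` here) apply. An external
    key_builder, if given, receives the already-resolved namespace, so it
    never needs access to the cache instance itself.
    """
    ns = default_namespace if namespace is None else namespace
    if key_builder is not None:
        return key_builder(key, ns)
    return f"{ns}{key}"
```

Note that an explicit empty-string namespace suppresses the default, which is exactly the distinction lost when a caller bypasses `build_key` and prefixes keys ad hoc.

```python
build_key("k", namespace="", default_namespace="dflt:")  # -> "k", not "dflt:k"
```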