I understand that the problem I'm describing isn't a problem for `LayeredCache`'s intended use (as described in the readme). But I had an idea: I could use ccache as one big cache for my entire application, which I could resize when I get into low-memory situations, and use the `LayeredCache` to partition the cache into functional parts (which I could wipe with `cache.DeleteAll("somepartition")`, etc.).

With that setup, though, I only have a handful of primary keys, so each partition ends up in its own bucket, and, not surprisingly, I see lots of lock contention in the profiler.

I can certainly simplify my setup to not use the `LayeredCache`, but it would be convenient, and in my head this would be fixed if the `bucket` method below considered both keys in the hash: