tippmar opened this issue 5 years ago
There is currently no locking mechanism on keys, no. So yeah, that's the expected behavior: if you run multiple threads against the same key, you might get collisions.
That being said, I'm planning to add lock helpers and some functionality in the future which will help with that.
Any suggestions for a short-term workaround? Or a timeline on adding the locking semantics?
As long as you don't use distributed caching, it is relatively easy. You can cache a SemaphoreSlim per key: wait on it, then release and remove the semaphore afterwards.
If you do use distributed caching, you would have to use a distributed lock mechanism, which can get way more complicated.
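To make that concrete, here is a minimal sketch of the per-key semaphore idea for the non-distributed case. The KeyedLockExtensions class and GetOrAddLocked method are illustrative names only, not part of CacheManager:

using System;
using System.Collections.Concurrent;
using System.Threading;
using CacheManager.Core;

public static class KeyedLockExtensions
{
    // One SemaphoreSlim per cache key, created on demand.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public static T GetOrAddLocked<T>(this ICacheManager<T> cache, string key, Func<string, T> valueFactory)
    {
        var semaphore = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        semaphore.Wait();
        try
        {
            // Only one caller per key reaches the value factory at a time.
            return cache.GetOrAdd(key, valueFactory);
        }
        finally
        {
            semaphore.Release();
            // Remove the semaphore afterwards, as suggested above. This cleanup is best-effort;
            // a late-arriving caller can still end up on a fresh semaphore in rare cases.
            Locks.TryRemove(key, out _);
        }
    }
}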
Just to back up the missing feature with a test case:
[Fact]
public void TestConcurrentGetOrAddSameKey()
{
    var rnd = new Random();

    var cache = CacheFactory.Build<object>(settings => settings
        .WithMicrosoftMemoryCacheHandle()
        .And
        //.WithUpdateMode(CacheUpdateMode.Up)
        .WithGzJsonSerializer()
        );

    CacheItem<object> GenerateValue(string key)
    {
        // Simulate a slow value factory so both callers overlap.
        Thread.Sleep(5000);
        return new CacheItem<object>(key, $"{rnd.Next()} {DateTime.Now.ToLongTimeString()} : HALLO WORLD FOR " + key);
    }

    object v1 = null;
    object v2 = null;
    const string sameKey = "Test 1 ";

    Parallel.Invoke(
        () => { v1 = cache.GetOrAdd(sameKey, GenerateValue).Value; },
        () => { v2 = cache.GetOrAdd(sameKey, GenerateValue).Value; });

    Assert.True(v1 == v2); // FAILS: the factory runs twice and the two values differ
}
@MichaCo shouldn't it be ReaderWriterLockSlim rather than SemaphoreSlim?
In case anyone else comes upon this issue and wants a workaround, I've built a set of extension methods that provide synchronized behavior for .GetOrAdd(). The only added requirement for using the extensions is that you have to pass in the synchronization object you want to use: typically a statically declared Object() instance or, in the case of the async extension, a statically declared AsyncLock() instance (which requires the Nito.AsyncEx package).
Hope this helps someone.
using System;
using System.Threading.Tasks;
using CacheManager.Core;
using Nito.AsyncEx;

public static class ICacheManagerExtensions
{
    /// <summary>
    /// A synchronized version of <see cref="ICacheManager{TCacheValue}"/>.GetOrAdd that guarantees that the valueFactory function
    /// will be invoked exactly one time, blocking all other simultaneous callers until the factory func is complete.
    /// </summary>
    /// <typeparam name="T">The cache value type.</typeparam>
    /// <param name="cacheManager">The cache manager instance.</param>
    /// <param name="lockObject">A static <see cref="Object"/> instance, declared at the same scope as cacheManager, that will be used to synchronize access to the cache manager.</param>
    /// <param name="cacheKey">The cache key.</param>
    /// <param name="valueFactory">The factory invoked to create the value when the key is missing.</param>
    /// <returns>The cached or newly created value.</returns>
    public static T SynchronizedGetOrAdd<T>(this ICacheManager<T> cacheManager, object lockObject, string cacheKey, Func<string, T> valueFactory)
    {
        if (!cacheManager.Exists(cacheKey))
        {
            lock (lockObject)
            {
                // Double-check: another caller may have added the value while we waited for the lock.
                if (!cacheManager.Exists(cacheKey))
                {
                    var value = valueFactory(cacheKey);
                    cacheManager.Add(cacheKey, value);
                    return value;
                }
            }
        }

        return cacheManager.Get(cacheKey);
    }

    /// <summary>
    /// A synchronized, async version of <see cref="ICacheManager{TCacheValue}"/>.GetOrAdd that guarantees that the async valueFactory function
    /// will be invoked exactly one time, blocking all other simultaneous callers until the factory func is complete.
    /// </summary>
    /// <typeparam name="T">The cache value type.</typeparam>
    /// <param name="cacheManager">The cache manager instance.</param>
    /// <param name="lockObject">A static <see cref="AsyncLock"/> instance, declared at the same scope as cacheManager, that will be used to synchronize access to the cache manager.</param>
    /// <param name="cacheKey">The cache key.</param>
    /// <param name="asyncValueFactory">The async factory invoked to create the value when the key is missing.</param>
    /// <returns>The cached or newly created value.</returns>
    public static async Task<T> SynchronizedGetOrAddAsync<T>(this ICacheManager<T> cacheManager, AsyncLock lockObject, string cacheKey, Func<string, Task<T>> asyncValueFactory)
    {
        if (!cacheManager.Exists(cacheKey))
        {
            using (await lockObject.LockAsync())
            {
                // Double-check: another caller may have added the value while we awaited the lock.
                if (!cacheManager.Exists(cacheKey))
                {
                    var value = await asyncValueFactory(cacheKey);
                    cacheManager.Add(cacheKey, value);
                    return value;
                }
            }
        }

        return cacheManager.Get(cacheKey);
    }
}
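For anyone wiring these up, a hypothetical usage sketch follows. The ReportCache class, the BuildReport/BuildReportAsync helpers, and the lock field names are illustrative assumptions, not taken from this thread:

using System;
using System.Threading.Tasks;
using CacheManager.Core;
using Nito.AsyncEx;

public class ReportCache
{
    // One lock object per cache; every cache miss serializes on it, regardless of key.
    private static readonly object CacheLock = new object();
    private static readonly AsyncLock CacheAsyncLock = new AsyncLock();

    private static readonly ICacheManager<string> Cache =
        CacheFactory.Build<string>(settings => settings.WithMicrosoftMemoryCacheHandle());

    public string GetReport(string reportId)
    {
        // The factory runs once; concurrent callers block on CacheLock until it completes.
        return Cache.SynchronizedGetOrAdd(CacheLock, reportId, key => BuildReport(key));
    }

    public Task<string> GetReportAsync(string reportId)
    {
        return Cache.SynchronizedGetOrAddAsync(CacheAsyncLock, reportId, key => BuildReportAsync(key));
    }

    // Placeholder factories standing in for whatever expensive work produces the value.
    private static string BuildReport(string key) => $"report for {key}";

    private static Task<string> BuildReportAsync(string key) => Task.FromResult($"report for {key}");
}

One trade-off worth noting: because a single lock object guards the whole cache, misses for different keys also serialize on it, unlike the per-key semaphore approach mentioned earlier in the thread.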
I'm finding that, when multiple threads call .GetOrAdd() on the same empty ICacheManager<T> instance with the same key value, the valueFactory func is called multiple times, indicating an apparent lack of any synchronization within .GetOrAdd(). Is this the correct behavior? My expectation is that the first thread to call .GetOrAdd() when the given key does not exist in cache would block other callers asking for the same key until the valueFactory func is complete, and would then return that newly cached item.