MichaCo / CacheManager

CacheManager is an open source caching abstraction layer for .NET written in C#. It supports various cache providers and implements many advanced features.
http://cachemanager.michaco.net
Apache License 2.0

Lua scripting + RedisConnectionException: SocketClosed #281

Closed macchmie3 closed 4 years ago

macchmie3 commented 4 years ago

Hi,

I am using CacheManager in an app hosted on Azure. The app uses an in-memory cache with Redis as a backplane.

We sometimes see the following error:

StackExchange.Redis.RedisConnectionException: No connection is available to service this operation: DEL <some cache key>; SocketClosed (ReadEndOfStream, last-recv: 0) on <cache url>/Interactive, Idle/MarkProcessed, last: EVAL, origin: ReadFromPipe, outstanding: 0, last-read: 0s ago, last-write: 24s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 9 of 10 available, in: 0, in-pipe: 0, out-pipe: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.601.3402; IOCP: (Busy=3,Free=997,Min=256,Max=1000), WORKER: (Busy=95,Free=8096,Min=1024,Max=8191), Local-CPU: n/a ---> StackExchange.Redis.RedisConnectionException: SocketClosed (ReadEndOfStream, last-recv: 0) on <cache url>/Interactive, Idle/MarkProcessed, last: EVAL, origin: ReadFromPipe, outstanding: 0, last-read: 0s ago, last-write: 24s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 9 of 10 available, in: 0, in-pipe: 0, out-pipe: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.601.3402

--- End of inner exception stack trace ---
   at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in C:\projects\stackexchange-redis\src\StackExchange.Redis\ConnectionMultiplexer.cs:line 2237
   at StackExchange.Redis.RedisBase.ExecuteSync[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in C:\projects\stackexchange-redis\src\StackExchange.Redis\RedisBase.cs:line 54
   at StackExchange.Redis.RedisDatabase.KeyDelete(RedisKey key, CommandFlags flags) in C:\projects\stackexchange-redis\src\StackExchange.Redis\RedisDatabase.cs:line 590
   at CacheManager.Redis.RedisCacheHandle`1.<>c__DisplayClass53_0.<RemoveInternal>b__0()
   at CacheManager.Redis.RetryHelper.Retry[T](Func`1 retryme, Int32 timeOut, Int32 retries, ILogger logger)
   at CacheManager.Redis.RedisCacheHandle`1.Retry[T](Func`1 retryme)
   at CacheManager.Redis.RedisCacheHandle`1.RemoveInternal(String key, String region)
   at CacheManager.Core.Internal.BaseCache`1.Remove(String key, String region)
   at CacheManager.Core.BaseCacheManager`1.RemoveInternal(String key, String region)
   at CacheManager.Core.Internal.BaseCache`1.Remove(String key, String region)

I was wondering what the cause could be. Some of our keys may store large values (~several MB), but as seen here, these errors also happen on DEL operations.

I have one more question. What is the advantage of using Lua scripts on Redis? Is there a way to avoid them when using your library? We sometimes see the connection to Redis get stuck because of higher traffic, and I was wondering how we could optimize that. Do Lua scripts take more time to execute than simply reading the cache with calls like _connection.Database.StringGet()?

MichaCo commented 4 years ago

Hi @macchmie3, I doubt it has anything to do with Lua scripting. The scripts just make operations atomic, which is harder to achieve in plain code, because every individual step could fail and the extra round trips might even cause more network traffic.
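To illustrate the point, here is a hedged sketch (not CacheManager's actual internal script) of how a Lua script collapses a "set value and set expiration" pair into one atomic server-side operation via StackExchange.Redis:

```csharp
// Hedged sketch, not CacheManager's internal script: one EVAL performs
// SET + EXPIRE atomically on the server, in a single network round trip.
using StackExchange.Redis;

class LuaExample
{
    static void Main()
    {
        var mux = ConnectionMultiplexer.Connect("localhost:6379");
        IDatabase db = mux.GetDatabase();

        const string script = @"
            redis.call('SET', KEYS[1], ARGV[1])
            redis.call('EXPIRE', KEYS[1], ARGV[2])
            return 1";

        // Both commands run atomically; no other client can ever observe
        // the key without its TTL, and only one round trip is needed.
        db.ScriptEvaluate(
            script,
            new RedisKey[] { "my-key" },
            new RedisValue[] { "my-value", 60 });
    }
}
```

Doing the same with two separate SET and EXPIRE calls would leave a window where the key exists without a TTL, and would cost two round trips.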

The issue is most likely the large values; several MB of data on a single key is a lot. Monitor your Redis server, and maybe upgrade it.

Maybe also take a look at this issue and what others tried to resolve a similar error condition: https://github.com/StackExchange/StackExchange.Redis/issues/1120
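One common mitigation for intermittent SocketClosed errors under load (a hedged sketch, not a fix taken from that thread; the option names are real StackExchange.Redis connection-string options, but the values are purely illustrative) is to make the client configuration more forgiving:

```csharp
// Illustrative values only; tune for your workload.
// abortConnect=false lets the multiplexer keep retrying instead of failing hard,
// and larger timeouts give slow operations (e.g. DEL on multi-MB values) headroom.
var connectionString =
    "mycache.redis.cache.windows.net:6380," +
    "password=...," +
    "ssl=true," +
    "abortConnect=false," +
    "connectTimeout=10000," +   // ms allowed to establish the connection
    "syncTimeout=10000," +      // ms allowed for synchronous operations
    "keepAlive=60";             // seconds between keep-alive pings

settings.WithRedisConfiguration("RedisCache", connectionString);
```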

macchmie3 commented 4 years ago

@MichaCo Thanks for your fast answer. I have one more question: I have just tested the cache using GzJsonCacheSerializer (which we didn't use before), and it turns out the cached values are ~12x-13x smaller for us. Do you know how much of an impact the serialization has on performance?

If I configure the cache like the following:

settings.WithSerializer(typeof(GzJsonCacheSerializer))
    .WithUpdateMode(CacheUpdateMode.Up)
    .WithMaxRetries(int.Parse(ConfigurationManager.AppSettings["azure.cache.maxRetries"]))
    .WithMicrosoftMemoryCacheHandle("workerInProcessCache")
    .And
    .WithRedisConfiguration("RedisCache", secretsManager.GetSecret("RedisConnection"))
    .WithRedisBackplane("RedisCache")
    .WithRedisCacheHandle("RedisCache")
    .WithExpiration(ExpirationMode.Absolute, TimeSpan.FromMinutes(int.Parse(ConfigurationManager.AppSettings["azure.cache.expirationInMinutes"])));

Will the compression be applied to both layers? Is it possible to apply compression only to the Redis backplane?

MichaCo commented 4 years ago

Serialization is only done for distributed caches. Even if you configured a serializer but used only an in-memory cache, that serializer would never be invoked.

Compression adds overhead that is a bit hard to predict; it is usually a factor of 2x-3x slower, depending on object size, the number of objects, etc.

Here are a couple of (old) benchmark results

You can very easily test serialization performance with your data by simply running it through both serializers.
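As a hedged, self-contained illustration of that kind of test (using plain System.IO.Compression rather than CacheManager's serializer types, so the numbers only approximate what GzJsonCacheSerializer does on top of JSON serialization):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

class CompressionCostDemo
{
    static void Main()
    {
        // Synthetic "JSON" payload with lots of repetition,
        // loosely resembling typical cached values.
        byte[] payload = Encoding.UTF8.GetBytes(
            string.Concat(Enumerable.Repeat("{\"id\":123,\"name\":\"example\"},", 50_000)));

        var sw = Stopwatch.StartNew();
        byte[] compressed = Compress(payload);
        sw.Stop();

        Console.WriteLine(
            $"raw: {payload.Length} bytes, gzip: {compressed.Length} bytes " +
            $"({(double)payload.Length / compressed.Length:F1}x smaller), " +
            $"{sw.ElapsedMilliseconds} ms to compress");
    }

    static byte[] Compress(byte[] data)
    {
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }
}
```

Running the same loop over your real payloads with and without the compression step gives you the size/latency trade-off for your actual data.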

macchmie3 commented 4 years ago

Hi, I just wanted to let others know that in my case, downgrading StackExchange.Redis to version 1.2.6 helped. According to their issue list, there are a lot of performance and connectivity issues with the newer versions.