Closed macchmie3 closed 4 years ago
Hi @macchmie3, I doubt it has anything to do with Lua scripting. Scripts just allow atomic operations, which are harder to achieve with individual commands, because every step could break and might even cause more network traffic.
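For illustration, here is a minimal sketch of why a script helps. This is not CacheManager's actual script; the key layout, script body, and helper name are made up, and it only assumes StackExchange.Redis's `ScriptEvaluate` API:

```csharp
using StackExchange.Redis;

class AtomicExample
{
    // Hypothetical script: store a value and set its expiration in ONE
    // atomic server-side step. Done as two separate commands, the process
    // could die between them and leave a key without a TTL.
    const string SetWithExpire = @"
        redis.call('SET', KEYS[1], ARGV[1])
        redis.call('PEXPIRE', KEYS[1], ARGV[2])
        return 1";

    static void Put(IDatabase db, string key, string value, long ttlMs)
    {
        // Single EVAL round trip; both calls succeed or the script fails as a whole.
        db.ScriptEvaluate(SetWithExpire,
            new RedisKey[] { key },
            new RedisValue[] { value, ttlMs });
    }
}
```

Doing the same without a script would take two round trips (`SET` then `PEXPIRE`) with no atomicity guarantee between them.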
The issue is most likely the large values; several MB of data on a single key is a lot. Monitor your Redis server, and maybe upgrade it.
Maybe also take a look at this issue and what others tried to resolve a similar error condition: https://github.com/StackExchange/StackExchange.Redis/issues/1120
@MichaCo Thanks for your fast answer. I have one more question: I have just tested the cache using GzJsonCacheSerializer (which we didn't use before), and it turns out the cached values are ~12x-13x smaller for us. Do you know how much of an impact the serialization has on performance?
If I configure Redis like the following:
```csharp
settings.WithSerializer(typeof(GzJsonCacheSerializer))
    .WithUpdateMode(CacheUpdateMode.Up)
    .WithMaxRetries(int.Parse(ConfigurationManager.AppSettings["azure.cache.maxRetries"]))
    .WithMicrosoftMemoryCacheHandle("workerInProcessCache")
    .And
    .WithRedisConfiguration("RedisCache", secretsManager.GetSecret("RedisConnection"))
    .WithRedisBackplane("RedisCache")
    .WithRedisCacheHandle("RedisCache")
    .WithExpiration(ExpirationMode.Absolute, TimeSpan.FromMinutes(int.Parse(ConfigurationManager.AppSettings["azure.cache.expirationInMinutes"])));
```
Will the compression be applied to both layers? Is it possible to apply compression only to the Redis layer?
Serialization is only done for distributed caches. Even if you configured a serializer and used an in-memory cache only, that serializer would never be invoked.
Compression adds overhead which is a bit hard to predict, usually a factor of 2x-3x slower, depending on object size, number of objects, etc...
Here are a couple of (old) benchmark results
You can very easily test serialization performance with your own data by simply running it through both serializers.
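A rough sketch of such a comparison, assuming the parameterless serializer constructors and a `Serialize` method as exposed by CacheManager's `ICacheSerializer` (verify the exact signatures against the package version you use):

```csharp
using System;
using System.Diagnostics;
using CacheManager.Serialization.Json;

class SerializerComparison
{
    // Serialize the same payload with both serializers and report
    // output size and elapsed time, so the size/CPU trade-off of
    // gzip compression can be measured on YOUR data.
    static void Compare<T>(T payload)
    {
        var plain = new JsonCacheSerializer();
        var gzip = new GzJsonCacheSerializer();

        var sw = Stopwatch.StartNew();
        var plainBytes = plain.Serialize(payload);
        var plainMs = sw.ElapsedMilliseconds;

        sw.Restart();
        var gzipBytes = gzip.Serialize(payload);
        var gzipMs = sw.ElapsedMilliseconds;

        Console.WriteLine($"json:   {plainBytes.Length} bytes in {plainMs} ms");
        Console.WriteLine($"gzjson: {gzipBytes.Length} bytes in {gzipMs} ms");
    }
}
```

For stable numbers, run the comparison in a loop over representative objects rather than once, since JIT warm-up dominates a single iteration.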
Hi, I just wanted to let others know that in my case, downgrading StackExchange.Redis to version 1.2.6 helped. There are a lot of performance and connectivity issues with the new version, according to their issues list.
Hi,
I am using CacheManager in an app hosted on Azure. The app uses an in-memory cache with Redis as a backplane.
We sometimes get the following errors:
```
StackExchange.Redis.RedisConnectionException: No connection is available to service this operation: DEL <some cache key>; SocketClosed (ReadEndOfStream, last-recv: 0) on <cache url>/Interactive, Idle/MarkProcessed, last: EVAL, origin: ReadFromPipe, outstanding: 0, last-read: 0s ago, last-write: 24s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 9 of 10 available, in: 0, in-pipe: 0, out-pipe: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.601.3402; IOCP: (Busy=3,Free=997,Min=256,Max=1000), WORKER: (Busy=95,Free=8096,Min=1024,Max=8191), Local-CPU: n/a ---> StackExchange.Redis.RedisConnectionException: SocketClosed (ReadEndOfStream, last-recv: 0) on <cache url>/Interactive, Idle/MarkProcessed, last: EVAL, origin: ReadFromPipe, outstanding: 0, last-read: 0s ago, last-write: 24s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 9 of 10 available, in: 0, in-pipe: 0, out-pipe: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.601.3402
```
I was wondering what could be the cause. Some of our keys may store big values (~several MB), but as seen here, these errors also happen with DEL operations.
I have one more question. What is the advantage of using Lua scripts on Redis? Is there a way to avoid them when using your library? We sometimes see the connection to Redis get stuck because of higher traffic, and I was wondering how we could optimize it. Do Lua scripts take more time to execute than simply reading the cache with calls like _connection.Database.StringGet()?