Open paulgartner1 opened 1 year ago
How are you connecting, and issuing commands? How things are called is going to determine node selection, and then we can help advise :)
Hi Nick.
Connection is similar to:
IConnectionMultiplexer _connectionMultiplexer = ConnectionMultiplexer.Connect("<resource_name>.redis.cache.windows.net:6380,password=<some_password>,ssl=true,abortConnect=false");
Subscribe:
_connectionMultiplexer.GetSubscriber().Subscribe("channel_name", Subscription_Func);
Publish:
_connectionMultiplexer.GetSubscriber().Publish("channel_name", message as byte[]);
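For reference, a minimal sketch of how that handler is presumably wired up (Subscription_Func and its body are placeholders; the Subscribe overload used above takes an Action<RedisChannel, RedisValue>):

using StackExchange.Redis;

// Assumed shape of Subscription_Func: the callback receives the channel and the payload.
void Subscription_Func(RedisChannel channel, RedisValue message)
{
    byte[] payload = message; // RedisValue converts implicitly to byte[]
    // ... process payload ...
}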
We raised a support incident with Microsoft, and they have advised that all pub/sub messages appear to be going to a single shard.
Just so I understand the scenario: this is measuring distribution between shards, not distribution among replicas inside a single shard, is that right?
The sharding of regular keys is outside of our control - we need to issue commands to the relevant shard. However, your mention of pub/sub is intriguing: I recall there was a glitch where we were computing the pub/sub shard against a fixed value (see footnote). I thought this was fixed (and before 2.6.80), but now you have me doubting that.
Could you try against latest? Separately, I'll review our channel shard path for both pub and sub.
Footnote: for regular "publish"/"[p]subscribe", the channel is not sharded at the server level; messages are broadcast and repeated by all primary nodes, so theoretically both producer and consumer can use any primary node arbitrarily. This seemed sub-optimal, so our intent was to route channels using the same logic as we use for keys, in the same way as "spublish"/"ssubscribe" (which didn't exist when we did this originally). It is possible we have regressed there.
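For illustration only (this is not the library's internal code): with slot-based routing, a channel name is mapped to one of 16384 hash slots the same way a key is, via CRC16 of the name. A rough sketch of that calculation, ignoring {hash tags}:

// Rough sketch of Redis Cluster slot selection: CRC16 (XMODEM) of the name, modulo 16384.
static int GetHashSlot(string channelOrKey)
{
    ushort crc = 0;
    foreach (byte b in System.Text.Encoding.UTF8.GetBytes(channelOrKey))
    {
        crc ^= (ushort)(b << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
    }
    return crc % 16384;
}

// A given channel name always yields the same slot, hence the same shard,
// for every publisher and subscriber.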
On Thu, 2 Feb 2023, 03:55 paulgartner1 wrote:
Since upgrading our project from StackExchange.Redis version 1.2.6 to 2.6.80, we have noticed that reads are unbalanced between the 2 nodes in the shard set. We are using Azure Cache for Redis with 2 shards in the cluster.
Prior to the update there was approximately a 10%-20% difference in reads across the shards; however, the difference is now 6-7x, with one shard taking the majority of the load. Most metrics are balanced across the 2 shards: connections, cache hits & misses, gets & sets, used memory, total keys.
Metrics including total operations, CPU, server load, and reads all exhibit a significant difference across the 2 shard nodes.
We do make heavy use of pub/sub with a single channel.
[image: metrics chart] https://user-images.githubusercontent.com/40531962/216227717-e61af311-0c1b-46fe-9f12-84c4d0fe3eb5.png Upgrade occurred on Jan 30.
@mgravell - Correct, this is measuring the distribution between shards.
We can try updating to the latest; however, ideally we would first identify a specific change in the pub/sub distribution logic that explains the behavior.
In the example above, is the channel name always the same? If so, that was an explicit change (fix) since 2.5.27 to properly route channels to a predictable (hash slot) node. If everyone is on one channel, it is expected to go through a single node in the current release. If that's your case, the earlier spread across shards was actually an unpredictable accident of key hashing rather than intentional (and you may not have been getting all messages because of it).
I hadn't considered it, but Nick is right: if you're only using a single channel, this change in behaviour is the fix for the error I described earlier. So knowing whether you're using one channel or many would be helpful. Again, this is SSUBSCRIBE-style routing, even without SSUBSCRIBE. The idea here is similar to cluster itself: if you do everything under one key (a large hash, for example), then only one server will be dealing with that load.
Correct, we are using a single channel. Thanks for the clarification.
@paulgartner1 clarification: are you observing an actual issue with this? Some kind of performance degradation, for example? If you are, there may be options for us to do something deliberately random for channel routing. But we'd need to design for it.
@mgravell - We are currently experiencing an issue: one shard is now under very high load and needs to be scaled up to cope, while overall capacity is not well used because the other shard is under-utilized. It would be great to distribute the load more evenly and make use of both shards.
Reopening so we can at least track and think of options.
Thinking aloud: the main thing I can think of here is some new command flag to disable channel routing and instead use arbitrary routing (presumably randomised, in the hope that an approximately proportional set of clients would use each node).
Other option I'm thinking of: use multiple channels chosen intentionally to shard separately; send messages to all of them, and pick the channel randomly on a per-client basis.
The second option seems ugly, hard to do (picking channels with specific shards in mind), and awkward, but would at least work right away.
I'm not sure the second has to be very elegant, FWIW. I was thinking of an ever-incrementing counter, modulo n, indexing into an array of channel names (to reduce allocs), with the subscriber subscribing to "base-*". The publisher rotates through n channels; n could be, say, 100 or 1000, but the subscriber gets any of them via the wildcard.
For this massive case, a quick fix?
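A rough sketch of that rotation idea (the "base-{i}" channel names and the count of 100 are assumptions from this discussion, not an agreed design; _connectionMultiplexer is the multiplexer from the snippet at the top of the thread):

using System.Threading;
using StackExchange.Redis;

// Publisher rotates through n literal channels; the subscriber uses a single
// pattern subscription ("base-*") to receive messages from all of them.
const int n = 100; // "say 100 or 1000"
var channels = new RedisChannel[n];
for (int i = 0; i < n; i++)
    channels[i] = new RedisChannel($"base-{i}", RedisChannel.PatternMode.Literal);

long counter = -1;
ISubscriber sub = _connectionMultiplexer.GetSubscriber();

// publish side: ever-incrementing counter, modulo n
void PublishRotating(byte[] message)
{
    int index = (int)(Interlocked.Increment(ref counter) % n);
    sub.Publish(channels[index], message);
}

// subscribe side: the wildcard catches every "base-{i}" channel
sub.Subscribe(new RedisChannel("base-*", RedisChannel.PatternMode.Pattern),
    (channel, message) => { /* handle message */ });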
I don't think you can solve this by subscribing to base-*; it is the subscribers that are creating the load here. If they all subscribe using today's logic, they're all going to end up on the same node; all you're adding is that some publishers go direct to the correct server, and others add a broadcast hop before they get to the server hosting base-*.
Ironically, the feature we want is wildcard publish: "publish to any channels with subscribers that you know about that match the pattern base-*" - and PPUBLISH is not a thing.
We have managed to work around this issue: our specific use case provided an opportunity to aggregate multiple publish messages into a single message, and doing so has reduced pub/sub load on the shard considerably.
However, changes to better support and distribute high-volume single-channel pub/sub across a shard set would be beneficial.
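A minimal sketch of that kind of aggregation (the 50 ms batching window, the length-prefix framing, and the channel name are assumptions, not the actual implementation; _connectionMultiplexer is the multiplexer from the snippet at the top of the thread):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

// Buffer individual payloads briefly, then publish them as one combined message,
// reducing the number of PUBLISH commands hitting the shard.
var pending = new ConcurrentQueue<byte[]>();
ISubscriber sub = _connectionMultiplexer.GetSubscriber();

void Enqueue(byte[] message) => pending.Enqueue(message);

async Task FlushLoopAsync(CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromMilliseconds(50), ct); // assumed batching window
        if (pending.IsEmpty) continue;

        using var ms = new MemoryStream();
        using var writer = new BinaryWriter(ms);
        while (pending.TryDequeue(out var msg))
        {
            writer.Write(msg.Length); // simple length-prefixed framing
            writer.Write(msg);
        }
        writer.Flush();

        // subscribers reverse the framing to recover the individual messages
        await sub.PublishAsync("channel_name", ms.ToArray());
    }
}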