Hi!
Blocking: yes, I understand the need. I'm working on it as part of a batch of transport-level changes. On the "lower level objects": I'm working on that too, but you'd probably be better off waiting for the SE-Redis bits.
As for NRediSearch: none of the core RediSearch pieces block the client, as far as I'm aware. Can you be more specific?
BTW I'll probably try and put the pooling pieces into the new layer, not into SE-Redis directly (it'll just consume it). What this means for you is that it might be available more quickly, but you'll have to use a slightly different API. It won't be tricky, though, and I can guide you through it (as part of documentation) as it approaches readiness. It is moving at a good pace.
That all sounds fantastic!
Question: what framework(s) are you targeting?
As for NRediSearch: none of the core RediSearch pieces block the client, as far as I'm aware. Can you be more specific?
From my understanding of this article, search commands block the client, dispatch the request to a thread pool, then return, leaving the client blocked. The search is performed on the background thread, which finally returns data to the blocked client.
I just jumped into the code and I'm not sure my understanding is correct. It's evening and I need a fresh mind to really dig.
@mgravell .NET Core 3.x on Linux. We are on 3.0 now. We're building microservices and usually have some flexibility with what we target.
That said, we are writing data-intensive services and are very memory-conscious. We use ArrayPool, RecyclableMemoryStream, and newer features like Spans and Memory every chance we get to reduce GC pressure and LOH fragmentation.
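For context, a minimal sketch of that allocation-conscious style, assuming the Microsoft.IO.RecyclableMemoryStream package; the Process method and its payload handling are invented purely for illustration:

```csharp
using System;
using System.Buffers;
using Microsoft.IO;

static class PayloadHelpers
{
    private static readonly RecyclableMemoryStreamManager StreamManager = new RecyclableMemoryStreamManager();

    public static void Process(ReadOnlySpan<byte> payload)
    {
        // Rent a scratch buffer instead of allocating a new array per call.
        byte[] scratch = ArrayPool<byte>.Shared.Rent(payload.Length);
        try
        {
            payload.CopyTo(scratch);

            // Pooled streams avoid LOH allocations for large, short-lived buffers.
            using var stream = StreamManager.GetStream("payload");
            stream.Write(scratch, 0, payload.Length);
            // ... do work with the stream ...
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(scratch);
        }
    }
}
```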
Right; that article is mostly talking about the ability for one slow command to not block the entire server, but instead to allow concurrency.
Will a slow command still effectively block the client connection? Yes. But that's no different to any other slow command. In reality, most search ops are blazingly fast, and even the "slow" ones are simply fast rather than blazingly fast. But yes, if a search op took long enough to impact timeouts or heartbeats, it would still get ugly.
What we're really talking about when discussing blocking ops is open-ended ops - things that might take seconds, minutes, or hours, not because they're doing something, but because the connection has been placed on hold waiting for a kick. Blocking list pops, for example.
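To make that concrete, here is a rough sketch (key and channel names invented) of how that "wait for a kick" shape is usually expressed with the non-blocking primitives SE.Redis does support, rather than by parking a connection on BLPOP:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

class QueueConsumer
{
    private static readonly SemaphoreSlim Kick = new SemaphoreSlim(0);

    public static async Task RunAsync(ConnectionMultiplexer muxer, CancellationToken ct)
    {
        IDatabase db = muxer.GetDatabase();
        ISubscriber sub = muxer.GetSubscriber();

        // The pub/sub message is the "kick"; the multiplexed connection is never parked.
        await sub.SubscribeAsync("queue:kick", (_, __) => Kick.Release());

        while (!ct.IsCancellationRequested)
        {
            await Kick.WaitAsync(ct); // wait in-process, not on the Redis connection

            // Drain the list with ordinary non-blocking pops.
            RedisValue item;
            while (!(item = await db.ListLeftPopAsync("queue:items")).IsNull)
            {
                Console.WriteLine($"work item: {item}");
            }
        }
    }
}
```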
open-ended ops - things that might take seconds, minutes, or hours, not because they're doing something, but because the connection has been placed on hold waiting for a kick
Speaking for my own module, that's the plan. I plan to block the client, process the request on a background thread, then return data. The custom module command I want to query with SE.Redis is a read-through cache to a slower data source that can frequently take anywhere from 50 ms to 20 seconds to return. I expect to have hundreds of connections to Redis open at any given time, with no reason it couldn't grow to thousands. It's tricky because you can't estimate how long a call will take until you issue the command and it checks the cache. I suppose I could return immediately with an ID and have the client re-query, or do something elaborate with streams, but that adds complexity.
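As a purely hypothetical sketch of that "return an ID and re-query" idea: CACHE.FETCH and CACHE.RESULT below are made-up module commands, but IDatabase.ExecuteAsync is the real SE.Redis entry point for issuing custom commands:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

static class ReadThroughCacheClient
{
    public static async Task<RedisResult> FetchAsync(IDatabase db, string key)
    {
        // Kick off the read-through; the (hypothetical) module returns a ticket
        // immediately instead of holding the multiplexed connection for up to 20 seconds.
        RedisResult ticket = await db.ExecuteAsync("CACHE.FETCH", key);

        // Poll for completion; in practice a pub/sub notification could replace the loop.
        while (true)
        {
            RedisResult result = await db.ExecuteAsync("CACHE.RESULT", (string)ticket);
            if (!result.IsNull) return result;
            await Task.Delay(TimeSpan.FromMilliseconds(100));
        }
    }
}
```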
Yep; this won't work well under the existing code, but: plans are afoot
@prat0088 going through issues to do a cleanup pass - are we good here on plans? Just trying to tidy :)
Yep, thanks!
@mgravell
I'm in the early stages of writing a Redis module that blocks, does work on a background thread, and returns. From my understanding so far, this won't work with StackExchange.Redis because of its clearly stated lack of support for blocking commands.
I also noticed your comment on Twitter about potentially big refactors coming to StackExchange.Redis. Anything you could do to make it possible to call blocking commands would be extremely useful to me, even if it meant constructing my own pool from lower-level objects.
According to another issue in this repo, I can't even construct my own connection pool, because if I issue a blocking command, heartbeats will be missed.
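For reference, the kind of hand-rolled pool being discussed is roughly the following (a sketch only: it is just round-robin plumbing over multiple multiplexers and does nothing about the heartbeat problem on whichever connection a blocking command occupies):

```csharp
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

sealed class MultiplexerPool
{
    private readonly ConnectionMultiplexer[] _connections;
    private int _next = -1;

    private MultiplexerPool(ConnectionMultiplexer[] connections) => _connections = connections;

    public static async Task<MultiplexerPool> CreateAsync(string configuration, int size)
    {
        var connections = new ConnectionMultiplexer[size];
        for (int i = 0; i < size; i++)
        {
            connections[i] = await ConnectionMultiplexer.ConnectAsync(configuration);
        }
        return new MultiplexerPool(connections);
    }

    // Round-robin: each caller gets the "next" multiplexer's database.
    public IDatabase GetDatabase()
    {
        int index = (int)((uint)Interlocked.Increment(ref _next) % (uint)_connections.Length);
        return _connections[index].GetDatabase();
    }
}
```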
By the way, how does NRediSearch work? I presume it uses the RediSearch module, and that's a multi-threaded module that blocks the client.
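NRediSearch is a thin wrapper that issues the FT.* commands through a regular IDatabase, so from the client's perspective a search is just another request/response call. A rough usage sketch (shape based on the NRediSearch README; index name and query text invented, and exact signatures may vary by version):

```csharp
using System;
using NRediSearch;
using StackExchange.Redis;

class Example
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect("localhost:6379");

        // Wraps an IDatabase; commands go over the same multiplexed connection.
        var search = new Client("products-idx", muxer.GetDatabase());

        // Under the hood this is an FT.SEARCH call.
        SearchResult result = search.Search(new Query("wireless headphones"));
        foreach (Document doc in result.Documents)
        {
            Console.WriteLine(doc.Id);
        }
    }
}
```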