assumes that NetworkStream.ReadAsync will return a synchronously completed task
Where does it assume that?
On line 188:
return _readAheadTask.Value.IsCompleted; // equivalent to polling
I.e., when _readAheadTask is first created, the only possibility for IsCompleted to be true is for ReadAsync to have returned a completed task.
EnsureReadAheadAndPollRead is actually called when the connection pool is looking for usable connections - this is too late to start read-ahead tasks (they should be queued in HttpConnectionPool.ReturnConnection in order to be useful)
when _readAheadTask is first created, the only possibility for IsCompleted to be true is for ReadAsync to have returned a completed task
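For context, the method in question looks roughly like this (a simplified paraphrase of the corefx source with error handling elided; `_stream` and `_readBuffer` are the connection's underlying stream and scratch read buffer):

```csharp
// Rough paraphrase of HttpConnection.EnsureReadAheadAndPollRead (simplified).
// The read-ahead task is created lazily the first time the pool checks the
// connection, and its completion state is used as the "poll".
private ValueTask<int>? _readAheadTask;

public bool EnsureReadAheadAndPollRead()
{
    if (!_readAheadTask.HasValue)
    {
        _readAheadTask = _stream.ReadAsync(new Memory<byte>(_readBuffer));
    }

    // True only if the read has already completed, either synchronously or
    // asynchronously by the time we check here.
    return _readAheadTask.Value.IsCompleted; // equivalent to polling
}
```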
That doesn't mean it assumes it completes synchronously: that means it's checking to see whether it's already completed by that point (it may have completed synchronously, or it may have completed asynchronously by the time we check). Note, too, that the code above is lazily initializing the task... it could have been created on a previous call.
they should be queued in HttpConnectionPool.ReturnConnection in order to be useful
We used to do that, and we changed it. The problem with doing it then is you end up having this extra read sitting around, which can result in exceptions should the connection get closed, in particular when the stream is something higher-level than a NetworkStream, like SslStream, which will throw if it doesn't get back all the data it expects.
which manifests as an HttpRequestException from HttpClient.PostAsync due to a TCP RST when trying to write into a closed TCP connection
There is nothing we can do to avoid a race condition that occurs should the remote endpoint choose to close the connection while we're trying to reuse it: that is simply the nature of the HTTP protocol and connection reuse. That's why code like this exists: https://github.com/dotnet/corefx/blob/97d9fb9d6c7e1d50f527e290ef8e18767dbc6bbf/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L519-L521 where if after getting a connection from the pool we try to use it and discover we can't, we will try again with a different connection.
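The referenced retry path looks roughly like the following sketch (identifiers are approximate rather than the exact corefx code):

```csharp
// Loose sketch of the retry loop in HttpConnectionPool (names approximate).
while (true)
{
    (HttpConnection connection, bool isNewConnection) =
        await GetConnectionAsync(request, cancellationToken);
    try
    {
        return await connection.SendAsync(request, cancellationToken);
    }
    catch (HttpRequestException e) when (!isNewConnection && connection.CanRetry)
    {
        // A reused connection failed before the server could have processed the
        // request (e.g. it had already been closed); loop around and try again
        // with another pooled connection or a brand new one.
    }
}
```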
There is nothing we can do to avoid a race condition that occurs should the remote endpoint choose to close the connection while we're trying to reuse it
In this case (from mono#17710), there is no race condition - the connection is deterministically closed long before we begin searching for usable pooled connections (in my reproduction case there is a 0.5 second delay, which can be arbitrarily increased).
That's why code like this exists:
In my case, I see that HttpClient retries GET requests, but for POST this does not apply - in the debugger I see that connection.CanRetry = false.
HttpRequestException happens inside SendRequestContentAsync -> FlushAsync: https://github.com/dotnet/corefx/blob/97d9fb9d6c7e1d50f527e290ef8e18767dbc6bbf/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnection.cs#L788. This is the first time a TCP packet is sent to the server - there is no earlier chance to detect a network error.
We cannot restart any POST requests because they're not idempotent (see section 9.1.2 of RFC 2616). After sending any packets to the server, there is no way of knowing whether or not it has received and acted upon them, so the only thing we can do is to abort with an error.
The remote closing the connection could in theory be detected by a read or poll - but there could still be race conditions.
One could also argue that the "steps to reproduce" from https://github.com/mono/mono/issues/17710#issue-517743087 describes something that is in some way "broken by design" - like, why would you want to set the Keep-Alive to 1 second when you want to reuse the connection? That doesn't really make any sense. There is of course the possibility that this is simply a test case to demonstrate a race condition that might happen at any time.
Is this a Mono-only issue or does it affect .NET Core? And if it is Mono-only, then we should investigate whether it's due to some implementation detail - or maybe some general problem that might also happen in .NET Core.
@baulig where does this impact restarting requests?
Not sure - all idempotent requests (GET, HEAD, PUT and DELETE) should be restarted via that HttpConnectionPool loop.
I am also not really sure whether the OP was reporting a race condition (since we're speaking of a half second here) or something that would happen each time the remote closes the connection.
I've just spent some time investigating this in Mono and this is not a race condition, it fails deterministically every single time.
There is a slight implementation detail in the Sockets code: even if the connection has already been closed a long time ago, it looks like Mono's implementation of ReadAsync() does the actual reading on the ThreadPool, and therefore always returns a not-yet-completed Task object. The first time EnsureReadAheadAndPollRead() is called, this will lazy-init the ReadAsync() and the subsequent Value.IsCompleted will be false - strictly speaking, this is a race condition, but in reality it will always fail because there won't be enough time between those two lines for the ThreadPool to run.
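In other words, the described Mono behavior is roughly equivalent to a stream override like this (a hypothetical fragment, not Mono's actual source):

```csharp
// Hypothetical illustration (not Mono's actual code): a stream whose ReadAsync
// always performs the read on the thread pool. Even for a socket that was
// closed long ago, the returned task is almost never completed by the time the
// caller immediately inspects Task.IsCompleted, so the "poll" sees false.
public override Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
    => Task.Run(() => Read(buffer, offset, count), cancellationToken);
```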
@stephentoub Is there any reason why we're not calling PollRead() instead of EnsureReadAheadAndPollRead() here:
https://github.com/dotnet/corefx/blob/97d9fb9d6c7e1d50f527e290ef8e18767dbc6bbf/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L298
I increased the timeout in the test case https://github.com/mono/mono/issues/17710 and the problem goes away with that change, whereas it consistently fails otherwise.
@stephentoub Is there any reason why we're not calling PollRead() instead of EnsureReadAheadAndPollRead() here:
Yes. We're trying to get a connection from the pool that is going to be used immediately to issue another request. We want to test to make sure it's still open, and as you suggest we could do that with PollRead. But we also know that we're about to issue a request, which means we're going to need to read from it. Rather than pay for two syscalls, one to poll and one to read, we can coalesce them into a single read.
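A hypothetical illustration of that trade-off (not actual SocketsHttpHandler code; `socket`, `stream`, and `responseBuffer` stand in for the connection's members):

```csharp
// Poll-then-read: one operation to check liveness now, plus the read we will
// need anyway once the request has been written -- two trips into the socket layer.
bool looksClosed = socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0;
// ... later, after writing the request ...
int bytesRead = await stream.ReadAsync(responseBuffer);

// Read-ahead: issue the read we will need for the response up front and use its
// completion state as the liveness check -- the same operation serves both purposes.
ValueTask<int> readAhead = stream.ReadAsync(responseBuffer);
bool suspicious = readAhead.IsCompleted; // data, EOF, or RST arrived before we sent anything
```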
this is not a race condition
The fact that increasing the timeout fixes things suggests otherwise :wink:
Seriously, though, the race condition I was referring to is definitely a race condition that is present in any implementation of an HTTP/1.1 client that pools connections. There is always going to be the possibility that a connection that appears to be open is then closed by the server between the time that the connection is checked for validity and the time that the implementation actually issues the request on it; it's simply the nature of the system.
But, yes, it could be worsened for our implementation if the ReadAsync we issue doesn't complete synchronously or very quickly for an already closed connection, since we do depend on that in the implementation in the line you highlight.
Oh sorry, it looks like my last comment was a bit misleading; with increasing the timeout I was referring to the time between closing the connection and attempting the next request - I increased that to 5 seconds, keeping the Keep-Alive at 0.5 seconds. The same thing happens when I add a Console.ReadLine() and check with netstat on the console that the connection has actually been closed. So it's a reliable failure.
I will try to write an xUnit test case for this after lunch, so we don't need to use nginx to test it.
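A rough sketch of what such a repro test could look like (hypothetical, with a raw TcpListener standing in for nginx and the simplifying assumption that each request arrives in a single read):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class StaleKeepAliveConnectionTests
{
    [Fact]
    public async Task Post_AfterServerClosesIdleKeepAliveConnection_Succeeds()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        var uri = new Uri($"http://{listener.LocalEndpoint}/");

        // Server: answer one request per connection with keep-alive, then close
        // the connection -- mimicking nginx's keepalive_timeout expiring.
        Task serverTask = Task.Run(async () =>
        {
            for (int i = 0; i < 2; i++)
            {
                using TcpClient accepted = await listener.AcceptTcpClientAsync();
                NetworkStream stream = accepted.GetStream();
                await stream.ReadAsync(new byte[8192]); // assume the request fits in one read
                byte[] response = Encoding.ASCII.GetBytes(
                    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: keep-alive\r\n\r\n");
                await stream.WriteAsync(response);
            } // disposing the TcpClient closes the connection server-side
        });

        using var client = new HttpClient();
        (await client.PostAsync(uri, new StringContent("a"))).EnsureSuccessStatusCode();

        // Give the server-side close plenty of time to reach the client,
        // so this is deterministic rather than a race.
        await Task.Delay(2000);

        // Before a fix, this reused the stale pooled connection and surfaced an
        // HttpRequestException; the handler should instead detect the closed
        // connection and issue the request on a new one.
        (await client.PostAsync(uri, new StringContent("b"))).EnsureSuccessStatusCode();

        await serverTask;
        listener.Stop();
    }
}
```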
Do you have any suggestions how to fix that or would you prefer us to make this a Mono-specific thing in our fork?
So it's a reliable failure.
I think we just have different definitions of "race condition" :)
Do you have any suggestions how to fix that or would you prefer us to make this a Mono-specific thing in our fork?
The only fixes I can think of right now are:
I'd like to avoid (1) in the dotnet/runtime implementation unless it can be proven it has zero measurable impact on throughput.
Can mono/mono's implementation of Sockets be "fixed"?
If not, then making the change for (1) in mono/mono's fork is probably best. Presumably it already drifts from the dotnet/runtime implementation in other ways.
Yes. We're trying to get a connection from the pool that is going to be used immediately to issue another request. We want to test to make sure it's still open, and as you suggest we could do that with PollRead. But we also know that we're about to issue a request, which means we're going to need to read from it. Rather than pay for two syscalls, one to poll and one to read, we can coalesce them into a single read.
Please correct me if I'm wrong, but:
1) ReadAsync(0) invokes the syscall whose result we inspect before even writing to the socket. I don't see how it would coalesce the subsequent read.
2) We will probably issue multiple of those syscalls, since we have to loop through the connection pool.
3) These syscalls happen on the "hot" path at "startup" time.
4) This relies on unspecified behavior for correctness, not just as an optimization.
I think a much better option is to have just one syscall per HTTP message at "dispose" time. This ensures that there is just one syscall per message, not several. This syscall can even be offloaded to a background thread to minimize latency.
(From a reliability standpoint, I believe an even better strategy is to have RDHUP in the epoll wait set for the socket; but I'm not sure whether the current API would allow that.)
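A minimal sketch of that idea (hypothetical; `StartReadAhead`, `ReadAheadTask`, and `AddToPool` are illustrative names, not the actual pool API):

```csharp
// Hypothetical sketch of the proposal: issue the read-ahead once per message,
// when the connection is returned to the pool rather than when it is taken out,
// so a later close by the server has ample time to complete the task before the
// pool inspects IsCompleted.
private void ReturnConnection(HttpConnection connection)
{
    connection.StartReadAhead(); // issues _stream.ReadAsync(...) once per message
    AddToPool(connection);       // illustrative
}

// Later, when looking for a usable pooled connection:
// a completed read-ahead means data, EOF, or RST already arrived -> don't reuse.
bool looksUsable = !connection.ReadAheadTask.IsCompleted;
```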
ReadAsync(0) invokes the syscall whose result we inspect before even writing to the socket. I don't see how it would coalesce the subsequent read.
It's more than just the syscall itself, but everything from the ReadAsync call down. We have to make that ReadAsync call regardless, so do we want to invoke ReadAsync, or ReadAsync and a poll? Plus, regardless of whether we issue the ReadAsync prior to writing anything or immediately after writing the headers, there's a really good chance we won't have received any of the response yet... so the poll generally will be an extra syscall.
We will probably issue multiple of those syscalls, since we have to loop through the connection pool
It is not common to have lots of stale connections in the pool, because there's a watchdog that goes through and cleans it out.
This relies on unspecified behavior for correctness, not just as an optimization
This is one component in the system relying on the behavior of another component in the same system. If I take the steering wheel from my car and try to use it with zero modifications in yours, it's not going to work correctly.
Also, even if we did change the call site @baulig suggests to use PollRead instead, for https connections and others where we don't have direct access to the Socket, we couldn't actually do a poll on the socket anyway, and it would fall back to doing the exact same read.
I'll have a look at how difficult it'd be to adjust our Sockets implementation accordingly. To make this easier to debug and test, I've started doing some preliminary cleanups that will make our System.Net.Http tests match the CoreFX version more closely.
Triage: We believe this is a problem in Mono Desktop (not .NET 5, but the mono/mono repo) -- it should be fixed there as suggested above in https://github.com/dotnet/runtime/issues/31399#issuecomment-565200477
(this is a cross-post from https://github.com/mono/mono/issues/17710 because the HttpClient code is in the corefx repo)
The HttpConnection.EnsureReadAheadAndPollRead method (https://github.com/dotnet/corefx/blob/97d9fb9d6c7e1d50f527e290ef8e18767dbc6bbf/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnection.cs#L170) assumes that NetworkStream.ReadAsync will return a synchronously completed task if there is already data to be received (or the socket has been shut down).
But this is not guaranteed (Streams are not specified to return synchronously completed tasks), and such assumptions lead to reuse of already closed sockets (which manifests as an HttpRequestException from HttpClient.PostAsync due to a TCP RST when trying to write into a closed TCP connection).