**Lukasa** opened this issue 10 years ago (status: Open)
In more than 5 years I have seen all of the bad things that happen when read() is called without a limit (and worse with readline()). Lots of users are not aware of the implications. Personally I would remove this "feature" entirely, but I understand that would be a very invasive move.
So, this is a DoS vector today: open a connection, say you're uploading, then send 1 byte every 29 minutes to keep TCP alive. Or open a connection and pipe in /dev/zero (with chunked Transfer-Encoding for added pain).
I think I already committed the change making CONTENT_LENGTH enforcement a must, from the WSGI2 notes PJE posted... but a missing content length - chunked in HTTP/1.1, just a bunch of frames in HTTP/2 - is a standard HTTP feature I think we need to support.
Tempting as it is, if we remove read(), users will just do:

```python
content = []
segment = read(65536)
while segment:
    content.append(segment)
    segment = read(65536)
```
So if we're going to make things materially better we need to do so in the plumbing. It does make sense to me that we might remove unsized read at the same time... see below.
There are two separate attacks:

- overloading: buffering an oversized request body until the server runs out of memory
- trickle streams: drip-feeding bytes to hold a connection (and its resources) open indefinitely
For overloading I think we can basically copy the ulimit concept: allow a server to have a default maximum size it will support, and have read() throw an error once the total content read for this request has exceeded that size. Apps that know they really want unlimited reads (bounded or not) can configure their server to disable the limit - we can leave that interface up to server authors. readline() probably wants a similar default maximum characters per line - or it could just rely on the read() knob as the cap. I suggest limiting the total read, not the amount in a single call, to save apps that are reading incrementally but still buffering: as long as the limit can be disabled, apps that really are not buffering can disable the protection. OTOH middleware could layer this protection, either way, over the basic interface for bounded reads, but middleware cannot provide it for unbounded reads.
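The ulimit-style cap described above could look something like the following sketch. `LimitedInput` and `RequestEntityTooLarge` are hypothetical names for illustration, not part of any WSGI specification; the default ceiling is arbitrary.

```python
class RequestEntityTooLarge(Exception):
    """Raised once the total bytes read for a request exceed the server cap."""

class LimitedInput:
    """Wraps a wsgi.input-like stream and caps the TOTAL bytes read,
    not the size of any single read() call, matching the proposal above."""

    def __init__(self, stream, max_total=10 * 1024 * 1024):
        self._stream = stream
        self._max_total = max_total  # None disables the limit entirely
        self._total = 0

    def read(self, size=-1):
        data = self._stream.read(size)
        self._total += len(data)
        if self._max_total is not None and self._total > self._max_total:
            raise RequestEntityTooLarge(
                "request body exceeded %d bytes" % self._max_total)
        return data
```

A server would wrap `environ["wsgi.input"]` in this before calling the app; apps that genuinely stream without buffering would ask the server to pass `max_total=None`.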
For trickle streams, I think a maximum duration is indeed a good answer for the unbounded read call, but it's not effective for the bounded read calls - e.g. how long we should wait is really expected_upload_size / effective_bandwidth. We can of course punt to the app and say 'you should time out if no one is feeding you enough bytes'. If we adopt the short-reads-can-happen proposal, then this can be done entirely by middleware, and perhaps we should recommend the use of such middleware in an associated 'best practices' PEP but not make the specification require servers to include it: preserving the original minimal focus of WSGI.
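One way such middleware-level trickle protection might look, as a sketch: enforce a minimum effective upload rate rather than a fixed per-call timeout, which is the size/bandwidth framing above. `MinRateInput`, `UploadTooSlow`, and the rate/grace defaults are all illustrative assumptions.

```python
import time

class UploadTooSlow(Exception):
    """Raised when the client's effective upload rate falls below the floor."""

class MinRateInput:
    """Wraps a wsgi.input-like stream and aborts the request if the
    cumulative transfer rate drops below min_bytes_per_sec after an
    initial grace period."""

    def __init__(self, stream, min_bytes_per_sec=1024, grace=5.0,
                 clock=time.monotonic):
        self._stream = stream
        self._min_rate = min_bytes_per_sec
        self._grace = grace        # seconds before the check kicks in
        self._clock = clock        # injectable for testing
        self._start = clock()
        self._received = 0

    def read(self, size=-1):
        data = self._stream.read(size)
        self._received += len(data)
        elapsed = self._clock() - self._start
        if elapsed > self._grace and self._received / elapsed < self._min_rate:
            raise UploadTooSlow(
                "effective rate below %d B/s" % self._min_rate)
        return data
```

Because this sits entirely above the basic bounded-read interface, it can live in optional middleware exactly as proposed, rather than in the specification.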
So tl;dr I think I've talked myself into proposing:

- a server-enforced default cap on the total bytes read per request, which apps can ask the server to disable
- short reads can happen: a bounded read() may return fewer bytes than requested
- trickle protection left to recommended middleware (documented in a 'best practices' PEP) rather than required by the specification
I can get behind that proposal, it looks reasonable to me.
If the protection middleware is optional then there's a large chance users won't use it - thus we'd still have the original problem.
Still, this sounds like a server feature; making users jump through hoops just to make vulnerable services/apps harder to write kinda defeats the purpose of having a good protocol.
Also, having a minimum rate is troublesome at best - there are so many ways to circumvent it, or have it backfire on legit clients that are a bit slow. I think the best thing would be an overall time limit to consume the input: either reach the end of the input in time, or have a way to signal the webserver that the app is not interested in it (and that it should be discarded).
@ionelmc One of the points PJE made is that the /protocol/ needs to have as few rough edges and special cases as possible to reduce implementor bugs. Buffering all uploaded content is a special case: for websockets, BOSH, HTTP/2, HTTP/1.1 chunked uploads and comet, buffering all uploaded content will break chatty apps. So removing the special case is a rather big win there - it will make the protocol much more robust.
Having an overall time limit to consume the input makes sense, but it can't apply uniformly to all requests, because dynamic bi-directional content has different needs from the upload-process-reply pattern. Likewise for an overall time limit to serve the request as a whole.

So the question is: do we define such limits and the way to control them as part of WSGI2, or do we make sure there is enough in WSGI2 that one can define such limits on top of it in a clean way? E.g. make sure that requests can be cleanly told that they are being interrupted, whether they are in a read() call or yielding output. Such interrupts might come from the network protocol (e.g. a BOSH session termination) or from the server (e.g. 'you took too long') - and as long as we have documented the contract, any middleware should be able to handle it without special casing.
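The "documented interrupt contract" idea above could be sketched like this: the server signals interruption (timeout, protocol-level session end) by raising one well-known exception out of read() or into the app's iterator, and middleware handles it generically. `RequestInterrupted` and the `hypothetical.log` environ key are invented names for the sketch, not anything in WSGI.

```python
class RequestInterrupted(Exception):
    """Hypothetical: raised by the server into a blocked read() or into
    the app's response iterator when the request is being terminated."""

def interrupt_logging_middleware(app):
    """Generic middleware: no special-casing per transport or per timeout
    source is needed, because the contract is a single exception type."""
    def wrapped(environ, start_response):
        try:
            for chunk in app(environ, start_response):
                yield chunk
        except RequestInterrupted as exc:
            # Uniform clean-up path regardless of why we were interrupted.
            environ.get("hypothetical.log", print)("interrupted: %s" % exc)
    return wrapped
```

The point of the design is that the interrupt's origin (BOSH teardown, server timeout, client disconnect) is invisible to middleware; only the documented exception matters.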
The functional requirements we have here, I think, are:
The "Input and Error Streams" section reads:
This is a trivial DoS vector. I can eat this up indefinitely by sending a bottomless HTTP/2 stream. This is something you aren't supposed to do (later sections say you shouldn't read more than `CONTENT_LENGTH`), but I guarantee that users will do it. Some options:
Alternatively, and more dramatically (possibly too dramatic for this draft), we should just remove it entirely. This is a great way to let me blow up your server by sending you an unbounded number of bytes until you read them all in and run out of RAM. Is there any good reason to allow this footgun?
As a side note, it also plays havoc with flow control in HTTP/2.
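For contrast, a sketch of what apps should be doing instead of the unbounded call: trust `CONTENT_LENGTH`, never the peer, and refuse oversized declarations up front. `read_body` and `MAX_BODY` are illustrative names, not part of the draft.

```python
MAX_BODY = 1 * 1024 * 1024  # illustrative app-chosen ceiling

def read_body(environ):
    """Read at most the declared request body, never an unbounded amount."""
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        length = 0
    if length > MAX_BODY:
        raise ValueError("request body too large")
    # Never environ["wsgi.input"].read() with no argument: that trusts
    # the peer to stop sending. Read at most the declared length.
    return environ["wsgi.input"].read(length)
```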