niloc132 closed this issue 3 months ago
A few clarifications:
Note that even if the spec hints to send a 431 for the first case, we tear down the connection because we do not want to risk an attack where the HPACK context is tainted with large headers (@lachlan-roberts do you confirm?). And in the second case, we detect that the headers are too large when we may already have encoded other headers, so the HPACK context is in an unrecoverable state and we can only tear down the connection.
@niloc132 can you try #12165?
> Note that even if the spec hints to send a 431 for the first case, we tear down the connection because we do not want to risk an attack where the HPACK context is tainted with large headers (@lachlan-roberts do you confirm?).
I don't think an attack is the concern: if headers are added beyond the SETTINGS_HEADER_TABLE_SIZE, other headers will just be evicted.
I think the issue is that we abort decoding the headers as soon as we see them exceed the MAX_HEADER_LIST_SIZE limit, so the instructions to add to the table have not been fully parsed. If we continue, the HPACK table will have an inconsistent state between the encoder and the decoder.
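The encoder/decoder divergence described above can be sketched in isolation. This is a purely conceptual illustration, not Jetty's actual HPACK code: it models the dynamic table as a deque and shows that once the decoder aborts mid-block, the same index refers to different entries on each side.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch (illustrative, not Jetty's implementation) of why
// aborting HPACK decoding mid-block desynchronizes the dynamic tables.
public class HpackDesyncSketch {

    // The encoder applies every "add to dynamic table" instruction it emits.
    static Deque<String> encoderTable(String[] block) {
        Deque<String> table = new ArrayDeque<>();
        for (String field : block) {
            table.addFirst(field);
        }
        return table;
    }

    // The decoder enforces MAX_HEADER_LIST_SIZE and stops as soon as the
    // accumulated size exceeds the limit, so later table instructions in
    // the same header block are never applied.
    static Deque<String> decoderTable(String[] block, int maxHeaderListSize) {
        Deque<String> table = new ArrayDeque<>();
        int size = 0;
        for (String field : block) {
            size += field.length();
            if (size > maxHeaderListSize) {
                break; // abort: the rest of the block is never parsed
            }
            table.addFirst(field);
        }
        return table;
    }

    public static void main(String[] args) {
        String[] block = {"x-a: 1", "x-b: 2", "x-huge: <very large value>"};
        // The most recent entry now differs between the two sides, so any
        // later indexed reference decodes to the wrong header: the
        // connection can only be torn down.
        System.out.println("encoder top = " + encoderTable(block).peekFirst());
        System.out.println("decoder top = " + decoderTable(block, 16).peekFirst());
    }
}
```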
Jetty version(s)
First observed in Jetty 10.0.16, 11.0.16, 12.0.0.beta2. Latest versions (10.0.20, 11.0.20, 12.0.9) all exhibit the same bug. Appears to have been introduced by https://github.com/jetty/jetty.project/commit/420ec7cc1db65b42693dc002aca0faa567f62f7e.
Jetty Environment
Observed with both ee8 and ee9.
Java version/vendor
Also observed with Java 8 for Jetty 10 and Java 11 for Jetty 11.
OS type/version
Linux
Description
When a client and server connect, they exchange SETTINGS frames, which permit the client to specify a maximum size for a header list. One of those settings is SETTINGS_MAX_HEADER_LIST_SIZE, which "informs a peer of the maximum size of header list that the sender is prepared to accept, in octets".
Later, the spec says "An endpoint can use the SETTINGS_MAX_HEADER_LIST_SIZE to advise peers of limits that might apply on the size of header blocks. This setting is only advisory, so endpoints MAY choose to send header blocks that exceed this limit and risk having the request or response being treated as malformed."
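For concreteness, RFC 7540 §6.5.2 defines the size being limited as the sum, over all header fields, of the uncompressed name and value lengths in octets plus an overhead of 32 octets per field. A minimal sketch of that calculation:

```java
// Sketch of how RFC 7540 §6.5.2 measures a header list against
// SETTINGS_MAX_HEADER_LIST_SIZE: uncompressed name length + value length
// + 32 octets of overhead for each field.
public class HeaderListSize {

    static long headerListSize(String[][] headers) {
        long size = 0;
        for (String[] field : headers) {
            // field[0] = name, field[1] = value; length() counts octets
            // here because the example fields are ASCII.
            size += field[0].length() + field[1].length() + 32;
        }
        return size;
    }

    public static void main(String[] args) {
        String[][] headers = {
            {":status", "200"},
            {"content-type", "text/plain"},
        };
        // (7 + 3 + 32) + (12 + 10 + 32) = 96
        System.out.println(headerListSize(headers)); // prints 96
    }
}
```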
The https://github.com/jetty/jetty.project/commit/420ec7cc1db65b42693dc002aca0faa567f62f7e commit in Jetty improved header handling generally, and introduced enforcement of this max header size.
The current behavior is that when a HEADERS frame would exceed that limit, the server abruptly ends the entire connection with a TCP FIN, rather than killing the affected stream with RST_STREAM or closing the h2 connection with GOAWAY. This is the source of the buggy behavior being reported here.
It appears that ending the connection SHOULD instead be (first) done with GOAWAY (https://httpwg.org/specs/rfc7540.html#rfc.section.5.4.1).
It would perhaps also be possible to treat this as only a stream error, sending RST_STREAM and leaving the connection running for other streams.
The bug this introduces: with a GOAWAY, the client knows the server observed an error and shut down the connection; without one, clients do not know what happened and might attempt to reconnect.
Specifically, this is introducing a failure in gRPC-java support for Jetty, where the client cannot tell why the upstream stopped responding. gRPC handles a GOAWAY by interpreting the failure as INTERNAL, while UNAVAILABLE is the result when the socket just ends, as Jetty is doing; UNAVAILABLE is interpreted as transient (i.e. nothing that a well-behaved server would deliberately do).
Also note that this does not happen with http/1.1 in Jetty; instead a 500 error is sent to the client.
How to reproduce?
Start an h2 server running one of the affected versions, with a simple handler (here using servlets) to send a giant payload:
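The original snippet is not included here; the following is a hypothetical handler (class and header names are illustrative, not from the issue), assuming the oversized "payload" is carried in a response header, since SETTINGS_MAX_HEADER_LIST_SIZE constrains header blocks only. ee9 uses `jakarta.servlet`; for ee8, substitute `javax.servlet`.

```java
// Hypothetical reproduction servlet: emits a response header far larger
// than any reasonable client-advertised SETTINGS_MAX_HEADER_LIST_SIZE.
// Names are illustrative; requires the jakarta.servlet API on the classpath.
import java.io.IOException;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class GiantHeaderServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Build a ~64 KiB header value, well above a small client limit.
        StringBuilder sb = new StringBuilder(64 * 1024);
        for (int i = 0; i < 64 * 1024; i++) {
            sb.append('x');
        }
        resp.setHeader("x-giant", sb.toString());
        resp.getWriter().println("ok");
    }
}
```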
Attempt to connect with an h2 client with a header limit below the payload size:
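The client snippet is likewise not included; a hedged sketch using gRPC-java, whose `maxInboundMetadataSize()` setting is advertised to the server as SETTINGS_MAX_HEADER_LIST_SIZE on the Netty transport (host, port, and limit below are illustrative):

```java
// Hypothetical client: advertises a header-list limit smaller than the
// server's response headers. Requires grpc-java on the classpath.
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class SmallHeaderLimitClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8443) // illustrative address
                .maxInboundMetadataSize(8 * 1024) // below the server's header size
                .build();
        // Issue a call with a generated stub here. When the oversized
        // response headers arrive, the affected Jetty versions close the
        // TCP connection without GOAWAY, so the client surfaces
        // UNAVAILABLE instead of INTERNAL.
        channel.shutdownNow();
    }
}
```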