jetty / jetty.project

Eclipse Jetty® - Web Container & Clients - supports HTTP/2, HTTP/1.1, HTTP/1.0, websocket, servlets, and more
https://eclipse.dev/jetty

h2 server responses exceeding SETTINGS_MAX_HEADER_LIST_SIZE do not result in RST_STREAM or GOAWAY #11822

Closed niloc132 closed 3 months ago

niloc132 commented 6 months ago

Jetty version(s)

First observed in Jetty 10.0.16, 11.0.16, 12.0.0.beta2. Latest versions (10.0.20, 11.0.20, 12.0.9) all exhibit the same bug. Appears to have been introduced by https://github.com/jetty/jetty.project/commit/420ec7cc1db65b42693dc002aca0faa567f62f7e.

Jetty Environment

Observed with both ee8 and ee9.

Java version/vendor (use: java -version)

$ java -version
openjdk version "21.0.3" 2024-04-16
OpenJDK Runtime Environment (build 21.0.3+9)
OpenJDK 64-Bit Server VM (build 21.0.3+9, mixed mode, sharing)

Also observed with Java 8 for Jetty 10, Java 11 for Jetty 11.

OS type/version

Linux

Description

When a client and server connect, they exchange SETTINGS frames, which among other things let the client specify a maximum size for a header list. That setting is SETTINGS_MAX_HEADER_LIST_SIZE, which "informs a peer of the maximum size of header list that the sender is prepared to accept, in octets".

Later, the spec says "An endpoint can use the SETTINGS_MAX_HEADER_LIST_SIZE to advise peers of limits that might apply on the size of header blocks. This setting is only advisory, so endpoints MAY choose to send header blocks that exceed this limit and risk having the request or response being treated as malformed."
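For reference, a minimal sketch of how a client built on Jetty's HTTP/2 API could advertise this setting explicitly. It assumes the Session.Listener.onPreface(...) hook and the SettingsFrame.MAX_HEADER_LIST_SIZE constant, and Jetty 12's default-method listener interface (older versions would extend Session.Listener.Adapter); the reproduction below instead relies on HTTP2Client.setMaxResponseHeadersSize(...) for the same purpose.

import java.util.HashMap;
import java.util.Map;

import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.frames.SettingsFrame;

// Sketch only: advertise SETTINGS_MAX_HEADER_LIST_SIZE in the SETTINGS frame
// sent after the connection preface.
public class AdvertiseMaxHeaderListSize implements Session.Listener {
    @Override
    public Map<Integer, Integer> onPreface(Session session) {
        Map<Integer, Integer> settings = new HashMap<>();
        // Advisory only (RFC 7540 section 6.5.2): the peer MAY still send
        // larger header lists and risk having them treated as malformed.
        settings.put(SettingsFrame.MAX_HEADER_LIST_SIZE, 8 * 1024);
        return settings;
    }
}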

The commit https://github.com/jetty/jetty.project/commit/420ec7cc1db65b42693dc002aca0faa567f62f7e improved Jetty's handling of headers generally, and introduced enforcement of this maximum header list size.

The current behavior is that when a HEADERS frame would exceed that limit, the server ends the entire connection abruptly with a TCP FIN, rather than killing the affected stream with RST_STREAM or the h2 connection with GOAWAY. This is the buggy behavior being reported.

It appears that ending the connection SHOULD instead first be done with a GOAWAY, per https://httpwg.org/specs/rfc7540.html#rfc.section.5.4.1:

An endpoint that encounters a connection error SHOULD first send a GOAWAY frame (Section 6.8) with the stream identifier of the last stream that it successfully received from its peer. The GOAWAY frame includes an error code that indicates why the connection is terminating. After sending the GOAWAY frame for an error condition, the endpoint MUST close the TCP connection.

It would perhaps also be possible to treat this as only a stream error, resetting the affected stream with RST_STREAM and leaving the connection running for other streams. A sketch of both options follows.
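For illustration only (this is not Jetty's actual fix), a sketch of the two reactions described above, assuming the Session.close(...) and Stream.reset(...) signatures from Jetty 10/11; the error code choices are examples.

import org.eclipse.jetty.http2.ErrorCode;
import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.api.Stream;
import org.eclipse.jetty.http2.frames.ResetFrame;
import org.eclipse.jetty.util.Callback;

class TooLargeHeadersReactions {
    // Connection-level: send GOAWAY with an error code, then close the TCP
    // connection, as RFC 7540 section 5.4.1 describes.
    static void failConnection(Session session) {
        session.close(ErrorCode.PROTOCOL_ERROR.code, "max_header_list_size exceeded", Callback.NOOP);
    }

    // Stream-level: reset only the offending stream with RST_STREAM and keep
    // the connection (and any other streams) alive.
    static void failStream(Stream stream) {
        stream.reset(new ResetFrame(stream.getId(), ErrorCode.PROTOCOL_ERROR.code), Callback.NOOP);
    }
}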

The bug this introduces: with a GOAWAY, the client knows the server observed an error and shut down the connection; without one, clients do not know what happened and might attempt to reconnect.

Specifically, this introduces a failure in gRPC-java support for Jetty, where the client cannot tell why the upstream stopped responding: gRPC interprets a GOAWAY as INTERNAL, while a socket that simply ends (as Jetty is doing) results in UNAVAILABLE, which is treated as transient (i.e. not something a well-behaved server would deliberately do).

Also note that this does not happen with HTTP/1.1 in Jetty; instead a 500 error is sent to the client.

How to reproduce?

Start an h2 server running one of the affected versions, with a simple handler (here using servlets) that sends a giant response header:

public class Server {
    public static void main(String[] args) throws Exception {
        org.eclipse.jetty.server.Server s = new org.eclipse.jetty.server.Server(10000);
        ServerConnector sc = (ServerConnector) s.getConnectors()[0];
        HttpConfiguration httpConfiguration = new HttpConfiguration();

        HTTP2CServerConnectionFactory factory =
                new HTTP2CServerConnectionFactory(httpConfiguration) {
                    @Override
                    protected ServerSessionListener newSessionListener(Connector connector,
                                                                       EndPoint endPoint) {
                        return new HTTPServerSessionListener(endPoint) {
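                            // Log the header list size limit the client advertised in its SETTINGS frame.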
                            @Override
                            public void onSettings(Session session, SettingsFrame frame) {
                                HTTP2Session s = (HTTP2Session) session;
                                System.out.println("Client max: " + s.getGenerator().getHpackEncoder().getMaxHeaderListSize());
                            }
                        };
                    }
                };
        factory.setRateControlFactory(new RateControl.Factory() {
        });
        sc.addConnectionFactory(factory);

        ServletContextHandler context =
                new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new HugeHeaderServlet()), "/*");
        s.setHandler(context);

        s.start();
        s.join();
    }
}
public class HugeHeaderServlet extends HttpServlet {
    // 16 KiB header value; the Jetty client defaults to an 8192-byte limit,
    // so this response exceeds what the client is prepared to accept.
    private static final String HUGE_HEADER_VALUE = "a".repeat(16 * 1024);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        resp.setHeader("Huge", HUGE_HEADER_VALUE);
    }
}

Then attempt to connect with an h2 client whose header list size limit is below the size of that header:

public class Client {
    public static void main(String[] args) throws Exception {
        HTTP2Client client = new HTTP2Client();
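        // Limit response headers to 8 KiB, below the 16 KiB header the server will send.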
        client.setMaxResponseHeadersSize(8 * 1024);
        client.setMaxFrameSize(1024 * 1024);
        client.start();
        CompletableFuture<Session> sessionPromise = client.connect(new InetSocketAddress("localhost", 10000), new ServerSessionListener() {
            @Override
            public void onAccept(Session session) {
                ServerSessionListener.super.onAccept(session);
            }
        });
        Session session = sessionPromise.get(5, TimeUnit.SECONDS);
        MetaData request = new MetaData.Request("GET", HttpURI.from("http://localhost:10000/"), HttpVersion.HTTP_2, null);
        CompletableFuture<Stream> streamPromise = session.newStream(new HeadersFrame(request, null, false), new Stream.Listener() {
            @Override
            public void onHeaders(Stream stream, HeadersFrame frame) {
                System.out.println("HEADERS: " + frame);
            }

            @Override
            public void onFailure(Stream stream, int error, String reason, Throwable failure, Callback callback) {
                System.out.println("FAILURE: " + reason);
            }

            @Override
            public void onReset(Stream stream, ResetFrame frame, Callback callback) {
                System.out.println("RESET: " + frame);
            }
        });

        streamPromise.get(5, TimeUnit.SECONDS);
    }
}
sbordet commented 3 months ago

A few clarifications:

Note that even though the spec hints at sending a 431 for the first case, we tear down the connection because we do not want to risk an attack where the HPACK context is tainted with large headers (@lachlan-roberts do you confirm?). And in the second case, we detect that the headers are too large when we may have already encoded other headers, so the HPACK context is in an unrecoverable state and we can only tear down the connection.

sbordet commented 3 months ago

@niloc132 can you try #12165?

lachlan-roberts commented 3 months ago

Note that even if the spec hints to send a 431 for the first case, we tear down the connection because we do not want to risk an attack where the HPACK context is tainted with large headers (@lachlan-roberts do you confirm?).

I don't think an attack is the concern, because if headers are added beyond the SETTINGS_HEADER_TABLE_SIZE then other headers will just be evicted.

I think the issue is that we abort decoding the headers as soon as we see them exceed the MAX_HEADER_LIST_SIZE limit, so the instructions to add to the table have not been fully parsed. So if we continue, the HPACK table will have an inconsistent state between the encoder and the decoder.
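To illustrate that desynchronization with a toy model (this is not Jetty's HpackEncoder/HpackDecoder): both peers keep a size-bounded dynamic table with oldest-first eviction, and if the decoder stops processing a header block partway through, later indexed references resolve to different entries on each side.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy model of an HPACK dynamic table. Entries are added at the head; when
// the size budget is exceeded, the oldest entries are evicted.
class ToyDynamicTable {
    private final Deque<String> entries = new ArrayDeque<>();
    private final int maxEntries;

    ToyDynamicTable(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    void add(String header) {
        entries.addFirst(header);
        while (entries.size() > maxEntries)
            entries.removeLast(); // eviction, not an error
    }

    String get(int index) {
        int i = 0;
        for (String e : entries) {
            if (i++ == index)
                return e;
        }
        throw new IllegalStateException("no entry at " + index);
    }

    public static void main(String[] args) {
        ToyDynamicTable encoder = new ToyDynamicTable(4);
        ToyDynamicTable decoder = new ToyDynamicTable(4);

        List<String> headerBlock = List.of("a: 1", "b: 2", "huge: ...", "c: 3");

        // The encoder indexes every header it emits.
        headerBlock.forEach(encoder::add);

        // The decoder aborts as soon as the accumulated header list exceeds
        // its limit (here, after the third header), so "c: 3" is never added.
        for (int i = 0; i < 3; i++)
            decoder.add(headerBlock.get(i));

        // On the next header block, index 0 now means different things on the
        // two sides: "c: 3" for the encoder, "huge: ..." for the decoder.
        System.out.println("encoder[0] = " + encoder.get(0));
        System.out.println("decoder[0] = " + decoder.get(0));
    }
}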