Open harpocrates opened 3 years ago
MAX_CONCURRENT_STREAMS is an HTTP/2 setting that a peer uses to tell the other peer how many concurrent streams it will allow that other peer to open.
When you set `maxConcurrentStreams` on an HTTP/2 client to, for example, 23, the client tells the server: "Hey! If you open streams, I'll stop accepting new streams once 23 are open."
In your code, the client tells the server it can't have more than 64 streams concurrently open. What the code doesn't specify is: on the server side, how many concurrent streams does the server allow?
Does that make sense? There's more info on the meaning of that setting in https://httpwg.org/specs/rfc7540.html#rfc.section.6.5.2 :-)
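As a concrete illustration, the limit the client announces can be set in `application.conf`; the path below is the one quoted later in this thread, and it may differ between Akka HTTP versions:

```hocon
# Sketch: the number of concurrent streams this client tells the server
# it will accept on one HTTP/2 connection (64 is this issue's example value)
akka.http.http2.max-concurrent-streams = 64
```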
> When you set `maxConcurrentStreams` on an HTTP/2 client to, for example, 23, the client tells the server: "Hey! If you open streams, I'll stop accepting new streams once 23 are open."
The question is whether that setting makes sense in that way on the client or if we should interpret it differently. I think there's an open ticket about that somewhere. I'll try to find it.
> In your code, the client tells the server it can't have more than 64 streams concurrently open. What the code doesn't specify is: on the server side, how many concurrent streams does the server allow?
That does make sense, thank you. However, I'm still left wondering now: at what point does the `http2` client flow ever back-pressure based on a signal from the server? Is the server also supposed to use MAX_CONCURRENT_STREAMS to inform the client of its maximum number of streams?
In case it is interesting, the reason for this issue was that when I used the `http2` client with Amazon ALB and tracked pending requests, I noticed that the flow would lock up (as in: stay alive, not accept any new requests, but still eventually return responses for already accepted requests) as soon as the number of pending requests reached 128 (so not 128 requests in total, just the first time there were 128 pending requests). This number stands out because it is apparently the maximum number of requests supported by ALB over HTTP/2:
> Application Load Balancers provide native support for HTTP/2 with HTTPS listeners. You can send up to 128 requests in parallel using one HTTP/2 connection. ...
All this made me think that maybe Akka wasn't respecting some setting. Then again, the issue could be entirely on the AWS server end, in which case I might need to put our client `http2` flow on some sort of semaphore-like bidi-flow to artificially bound the number of requests in flight at any given moment...
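Independent of where the fault lies, that "semaphore-like" bound can be sketched without any Akka machinery. Below is a minimal JVM illustration (the class name and the approach are my own assumptions, not an Akka API): acquire a permit before sending a request, release it when the response completes.

```java
import java.util.concurrent.Semaphore;

// Sketch (not Akka-specific): artificially bound the number of in-flight
// requests on one connection. A limit of 128 would mirror ALB's documented
// per-connection cap.
final class InFlightLimiter {
    private final Semaphore permits;

    InFlightLimiter(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    // Returns false when the limit is reached: back-pressure instead of sending.
    boolean tryStart() {
        return permits.tryAcquire();
    }

    // Call when a response (or failure) for an in-flight request arrives.
    void complete() {
        permits.release();
    }
}
```

A bidi-flow variant would apply the same idea: acquire on the request side of the connection flow, release on the response side.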
> That does make sense, thank you. However, I'm still left wondering now: at what point does the `http2` client flow ever back-pressure based on a signal from the server?
Yes, it does. As soon as the client receives the MAX_CONCURRENT_STREAMS setting from the server, it should respect it. There's probably a short time period at the beginning of a connection, before the settings have been received, where the client assumes it is allowed to send an unlimited number of requests.
> Is the server also supposed to use MAX_CONCURRENT_STREAMS to inform the client of its maximum number of streams?
Yes, it should. In any case, if the server receives more streams than it wants to handle, it has to reject the additional streams.
> I noticed that the flow would lock up (as in: stay alive, not accept any new requests, but still eventually return responses for already accepted requests) as soon as the number of pending requests reached 128
What should happen is that the client picks up further requests as soon as one of those initial 128 (or however many the server announced) requests has completed. Could you try setting `akka.http.client.http2.log-frames = true` and see if that confirms your suspicion? You could also post the output here (it would probably need some redaction to avoid sharing private data).
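For completeness, that toggle is a configuration entry (e.g. in `application.conf`); the frames are emitted through normal Akka logging, so the relevant logger's level may also need to be lowered to see them:

```hocon
# Log HTTP/2 frames on the client, useful to check whether the server's
# SETTINGS frame actually announces MAX_CONCURRENT_STREAMS = 128
akka.http.client.http2.log-frames = true
```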
I'm not sure when (or if) I'll have time to come back to this. Given that, would you prefer I close the issue in the meantime? I gather this is probably a bug in ALB's handling of too many concurrent streams.
Based on the documentation of `akka.http.http2.max-concurrent-streams`, I would expect that `OutgoingConnectionBuilder#http2()` would back-pressure as soon as `max-concurrent-streams` is reached. However, this doesn't seem to be the case: when I run the above, I see `Pending: 100` printed, which I understand to mean that more than 64 requests have made it into the `http2` flow.