analogrelay opened 4 years ago
Should do something about https://github.com/dotnet/aspnetcore/issues/12662
OR close this as a dupe 😄
This is the epic, so we'll keep it open
Yep, was about to close that actually.
Also, we're learning that there are significant issues around how clients behave when they hit various limits in HTTP/2, especially because we plan on having long-running requests.
TL;DR Some clients don't open new connections when they hit the concurrent streams limit.
Context: https://github.com/dotnet/runtime/issues/35088
Concurrent connection limits have always been an issue with SignalR. Browsers and the .NET framework used to enforce really low per-host concurrent connection limits. Browsers still do enforce low limits for non-WebSocket connections.
Over time, browsers have gradually increased their WebSocket limits. It now appears to be up to 255 concurrent WebSocket connections per host in Chrome and 200 in Firefox (based on the network.websocket.max-connections setting in about:config).
If a server advertises SETTINGS_MAX_CONCURRENT_STREAMS of 100, as Kestrel does by default, that's at most 50 concurrent SignalR "connections" using this new streaming transport over HTTP/2. That by itself doesn't seem so bad, but unlike with WebSockets (which have their own connection pool), hitting this limit could affect "normal" requests.
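To make that arithmetic concrete, here's a quick sketch. The two-streams-per-connection figure follows from the transport's paired upstream/downstream requests; the function name is illustrative, not an actual SignalR API:

```typescript
// Each streaming "connection" in this transport consumes two HTTP/2
// streams: one upstream request and one downstream request.
const STREAMS_PER_SIGNALR_CONNECTION = 2;

// Given a server's SETTINGS_MAX_CONCURRENT_STREAMS value, compute how many
// concurrent SignalR connections fit on a single HTTP/2 connection.
function maxSignalRConnections(maxConcurrentStreams: number): number {
  return Math.floor(maxConcurrentStreams / STREAMS_PER_SIGNALR_CONNECTION);
}

// Kestrel's default of 100 concurrent streams allows at most 50 connections.
console.log(maxSignalRConnections(100)); // → 50
```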
Tangentially, we'll probably want to use negotiate or something to detect HTTP/2; I doubt we'd want to use this transport when making HTTP/1.1 requests.
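One hypothetical shape for that check: in browsers, the Resource Timing API reports the negotiated ALPN protocol (nextHopProtocol) for a completed request, or the server could include a flag in the negotiate payload. A sketch of the gating logic, with illustrative names only:

```typescript
// ALPN protocol identifiers on which the two-stream transport makes sense.
// "h2" is HTTP/2; "h3" would also qualify if the server supported it.
const STREAMING_CAPABLE_PROTOCOLS = new Set(["h2", "h3"]);

// Decide whether to offer the streaming transport, given the protocol the
// negotiate request was observed to use (e.g. from the Resource Timing
// API's nextHopProtocol, or a field the server adds to the negotiate
// response). Hypothetical helper, not a real SignalR API.
function canUseStreamingTransport(negotiatedProtocol: string): boolean {
  return STREAMING_CAPABLE_PROTOCOLS.has(negotiatedProtocol);
}

console.log(canUseStreamingTransport("h2"));       // true
console.log(canUseStreamingTransport("http/1.1")); // false
```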
We've moved this issue to the Backlog milestone. This means that it is not going to be worked on for the coming release. We will reassess the backlog following the current release and consider this item at that time. To learn more about our issue management process and to have better expectation regarding different types of issues you can read our Triage Process.
The idea here is to build a new transport based on HTTP streaming. The client establishes two long-running HTTP requests; termination of either request by either party tears down the whole connection (and terminates the other request). This transport is designed for HTTP/2 environments and would likely function only when the requests are actually using HTTP/2.
One request serves as the "upstream" connection. The client sends the request headers (including auth) and then streams data to the server. The server does not write any response to this request until the connection terminates.
The other request serves as the "downstream" connection. The client sends request headers and an empty body. The server sends a long-running response back and writes data to it as data becomes available. The server concludes the response when the connection is terminated.
Why two requests? We could have a single request with both the client and server streaming request/response data. This is permitted in the HTTP protocol. However, support for this in all platforms (Browser, .NET, Node, Java, etc.) is limited and inconsistent. Doing this two-stream model doesn't stop us from doing a bi-directional transport in the future.
With this transport, we can also consider deprecating Server-Sent Events. The SSE transport provides a middle ground between Long Polling and WebSockets and has served us well, but it has limitations.
Work Items (see note below)
Note: This is something that's fairly low on our priority list for now and may not make 5.0, so we've only created issues to track the initial work (rather than spamming the tracker with a bunch of small issues). As we move forward, we'll create work items to track the rest of the work.