beenotung / ts-liveview

Build hybrid SSG and SSR realtime SPA/MPA with Typescript
https://liveviews.cc
BSD 2-Clause "Simplified" License
168 stars 2 forks

use http/2 features instead of websocket #13

Open hpvd opened 2 years ago

hpvd commented 2 years ago

use http/2 features instead of websocket

The idea was developed and discussed during an attempt to generally use http/2 instead of http/1 https://github.com/beenotung/ts-liveview/issues/4

Possible reasons to use http/2 solely, instead of http/1-or-2 plus websocket:

a) easier to set up/install/deploy
b) not having to take care of security and abuse for 2 different stacks (see this article for additional security concerns of WS: https://brightsec.com/blog/websocket-security-top-vulnerabilities/)
c) not having to take care of the performance of two stacks
d) easier for features like auto reconnect (network break or change)
e) easier to debug
f) the content delivery strategy does not have to be split across two types of connections (e.g. when using things like preload, see https://github.com/beenotung/ts-liveview/issues/14, or lazy loading)
g) one additional error source removed: the ws connection could fail to be established, as learned in https://github.com/beenotung/ts-liveview/issues/15
h) ...

Just aggregated the main information/sources (from https://github.com/beenotung/ts-liveview/issues/4):

1) http/2 seems to support full bidi streaming, so there seems to be:

as far as I understand, one needs to use:

Details:

Articles like this (linked in another answer) are wrong about this aspect of HTTP/2. They say it's not bidi. Look, there is one thing that can't happen with HTTP/2: After the connection is opened, the server can't initiate a regular stream, only a push stream. But once the client opens a stream by sending a request, both sides can send DATA frames across a persistent socket at any time - full bidi.

That's not much different from websockets: the client has to initiate a websocket upgrade request before the server can send data across, too.

...

If you need to build a real-time chat app, let's say, where you need to broadcast new chat messages to all the clients in the chat room that have open connections, you can (and probably should) do this without websockets.

You would use Server-Sent Events to push messages down and the Fetch api to send requests up. Server-Sent Events (SSE) is a little-known but well supported API that exposes a message-oriented server-to-client stream. Although it doesn't look like it to the client JavaScript, under the hood your browser (if it supports HTTP/2) will reuse a single TCP connection to multiplex all of those messages. There is no efficiency loss and in fact it's a gain over websockets because all the other requests on your page are also sharing that same TCP connection. Need multiple streams? Open multiple EventSources! They'll be automatically multiplexed for you.
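The pattern described in the quoted answer can be sketched in a few lines. This is a minimal illustration, not ts-liveview code; the `/events` and `/action` endpoints are made up, and the `parseSseChunk` helper only exists to make the SSE wire format concrete (the browser's `EventSource` does this parsing for you):

```typescript
// Pure helper: extract the payloads of `data:` lines from a raw SSE chunk.
// EventSource normally handles this; shown here to make the format concrete.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());
}

// Browser-side wiring (guarded so the file also runs outside a browser):
if (typeof EventSource !== "undefined") {
  // Downstream: the browser multiplexes this SSE stream onto the same
  // http/2 connection as regular requests.
  const events = new EventSource("/events");
  events.onmessage = (e) => console.log("server pushed:", e.data);

  // Upstream: plain fetch requests share that http/2 connection too.
  fetch("/action", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ type: "click", target: "page1" }),
  });
}
```

Because both directions ride on one multiplexed http/2 connection, this matches the answer's claim that no separate websocket stack is needed for this use case.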

this and more details in: https://stackoverflow.com/a/42465368

edit: if this can really be confirmed and it works like this, I was exactly right yesterday with my casual comment "could it be matched by other protocols and advanced configurations" :-) -> at first, the client has to open the stream, similar to how the client has to initiate the WS upgrade... (so you could/should even keep the WS indicator badge on your demo; it only has to be renamed to "stream open")

2) some more background on http/2 bidi streaming: https://web.dev/performance-http2/#streams-messages-and-frames

3) A Complete Guide to HTTP/2 in Node.js (With Example Code) https://www.sohamkamani.com/nodejs/http2/

4) there should be a way, since the fetch api can send requests up: 93% coverage on Can I Use https://caniuse.com/mdn-api_fetch and there also seems to be a polyfill if one really wants/needs more browser support (e.g. IE10): https://github.com/github/fetch

5) web docs: using fetch API https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch

6) browser compatibility chart for using fetch API https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#browser_compatibility

7) many fetch examples: https://github.com/mdn/fetch-examples/

8) ...

beenotung commented 2 years ago

I know how to enable server push updates to the client with SSE, but I am not sure how to send updates as a stream from client to server.
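The SSE half mentioned here is straightforward on the server side. Below is a minimal sketch using a plain Node.js http server (http/2 could be terminated in front of it); the `/events` path and the one-second interval are made up for illustration:

```typescript
import * as http from "http";

// Pure helper: format one SSE message (testable without a server).
function sseMessage(data: string): string {
  return `data: ${data}\n\n`;
}

const server = http.createServer((req, res) => {
  if (req.url === "/events") {
    // SSE requires this content type and benefits from disabling caching.
    res.writeHead(200, {
      "content-type": "text/event-stream",
      "cache-control": "no-cache",
    });
    // Push an update to this client every second until it disconnects.
    const timer = setInterval(() => {
      res.write(sseMessage(new Date().toISOString()));
    }, 1000);
    req.on("close", () => clearInterval(timer));
  } else {
    res.writeHead(404);
    res.end();
  }
});
// server.listen(3000);
```

The open question in this thread is the reverse direction: there is no equally simple standard for a client-to-server stream.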

I checked the examples in this article[1] again; streaming request bodies seem still not possible in Firefox (also tested with default settings in the latest stable release of Chrome).

[1] https://developer.chrome.com/articles/fetch-streaming-requests/

With http2's header compression, maybe it's fine to send multiple requests (instead of using a single long-lived request) and still get performance close to websocket over http1.
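The "many small requests" idea could look like the sketch below: each client event goes up as its own POST, relying on http/2 header compression (HPACK) to keep per-request overhead low, with an optional pure batching step to coalesce bursts. The `/update` endpoint and the `Update` shape are hypothetical:

```typescript
type Update = { seq: number; data: string };

// Pure helper: frame a batch of queued updates into one request body,
// so bursts of events can still be coalesced into a single request.
function encodeBatch(updates: Update[]): string {
  return JSON.stringify({ count: updates.length, updates });
}

// Send queued updates upstream; repeated headers cost little over http/2
// thanks to HPACK, so frequent small POSTs stay cheap on the wire.
async function sendUpdates(updates: Update[]): Promise<void> {
  if (typeof fetch === "undefined" || updates.length === 0) return;
  await fetch("/update", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: encodeBatch(updates),
  });
}
```

Whether this actually approaches websocket latency is exactly what the benchmark below measures.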

hpvd commented 2 years ago

so far so good :-)

hmm - is there nothing one could take from sources 7), 6) and 5) mentioned above?

beenotung commented 2 years ago

I just did a benchmark[1] on http2 request-response and websocket. Tested locally with https/wss.

The observed speed:

request-response: 45 ms/cycle
websocket: 1 ms/cycle

[1] https://github.com/beenotung/ts-liveview/tree/benchmark/benchmark

Maybe this test does not align with the real-world access pattern, or maybe the bottleneck is located in a different place :thinking:

hpvd commented 2 years ago

That's interesting! It would be really interesting to see the numbers for a full real-world example application.

hpvd commented 2 years ago

just another deep link: 8) Consuming a fetch as a stream https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Using_readable_streams#consuming_a_fetch_as_a_stream

beenotung commented 2 years ago

Regarding consuming a fetch response as a stream: it may be easier to use Server-Sent Events, which has a native reconnect mechanism. On the other hand, we cannot send a fetch request as a stream yet (except as a multipart form, but that still requires all blobs of the file to be ready when creating the form data).

When mainstream browsers support TextEncoderStream [ref], or make it possible to stream a ReadableStream as a request body, it may become a more lightweight alternative to websocket with similar performance.
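To make the idea concrete, here is a sketch of what such an upstream pipeline would look like: text chunks piped through `TextEncoderStream` into a byte stream, which a future `fetch(url, { method: "POST", body: byteStream, duplex: "half" })` could consume. This runs in Node 18+ too, since `ReadableStream` and `TextEncoderStream` are global there; the fetch call itself is only shown in a comment because browser support for streaming request bodies is still limited:

```typescript
// Build a stream of text chunks (e.g. user input events).
function stringStream(parts: string[]): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      for (const part of parts) controller.enqueue(part);
      controller.close();
    },
  });
}

// Drain a byte stream and count the bytes, to show the encoder output.
async function collectBytes(stream: ReadableStream<Uint8Array>): Promise<number> {
  const reader = stream.getReader();
  let total = 0;
  for (;;) {
    const result = await reader.read();
    if (result.done) return total;
    total += result.value.length;
  }
}

// Encode text chunks to bytes; this byteStream is what
// `fetch("/up", { method: "POST", body: byteStream, duplex: "half" })`
// would send, once streaming request bodies are broadly supported.
const byteStream = stringStream(["hello ", "world"]).pipeThrough(
  new TextEncoderStream()
);
collectBytes(byteStream).then((n) => console.log(`encoded ${n} bytes`));
```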

hpvd commented 2 years ago

The fetch examples, consisting of live demos and their source code, have been moved to: https://github.com/mdn/dom-examples/tree/master/fetch

Link to Demos: https://github.com/mdn/dom-examples/tree/master/fetch#fetch-examples

hpvd commented 2 years ago

What do you think of this example? https://mdn.github.io/dom-examples/fetch/fetch-text/ Isn't this the direction we are looking for (to send requests up)? (in combination with using SSE to directly push messages down in other use cases) => It's a fast and tiny answer to a click, full http/2... please look at the screenshot from the browser dev tools

Doc incl. browser compatibility chart: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API

[screenshot: 2022-08-16_22h30_49]

source: https://github.com/mdn/dom-examples/blob/master/fetch/fetch-text/index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
    <meta name="viewport" content="width=device-width" />

    <title>Fetch text example</title>

    <link rel="stylesheet" href="style.css" />
  </head>

  <body>
    <h1>Fetch text example</h1>
    <ul>
      <li><a data-page="page1">Page 1</a></li>
      <li><a data-page="page2">Page 2</a></li>
      <li><a data-page="page3">Page 3</a></li>
    </ul>
    <article></article>
  </body>
  <script>
    const myArticle = document.querySelector("article");
    const myLinks = document.querySelectorAll("ul a");
    for (const link of myLinks) {
      link.onclick = (e) => {
        e.preventDefault();
        const linkData = e.target.getAttribute("data-page");
        getData(linkData);
      };
    }

    function getData(pageId) {
      console.log(pageId);
      const myRequest = new Request(`${pageId}.txt`);

      fetch(myRequest)
        .then((response) => {
          if (!response.ok) {
            throw new Error(`HTTP error, status = ${response.status}`);
          }
          return response.text();
        })
        .then((text) => {
          myArticle.innerText = text;
        })
        .catch((error) => {
          myArticle.innerText = `Error: ${error.message}`;
        });
    }
  </script>
</html>

browser compatibility: [screenshot: 2022-08-16_23h11_59]

hpvd commented 2 years ago

added some more details to the comment above

hpvd commented 2 years ago

or am I constantly thinking in a completely wrong direction with this? And is this not solving the need in any way?

hpvd commented 2 years ago

when clicking around, the typical real-world speed is:

The distance to cover from browser to server is about 10,000 kilometers (at least according to a GeoIP lookup of the server); cache was of course disabled. (Can we be perfectly sure this is the full loop?) [screenshot: 2022-08-16_23h33_29]

beenotung commented 2 years ago

Converting the end-to-end time used for each request into a refresh rate (Hz) is an interesting perspective. It allows us to evaluate whether an approach is good enough, rather than just looking at raw throughput.
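The conversion itself is a one-liner; a small sketch, applied to the numbers measured in this thread:

```typescript
// Convert a measured end-to-end round-trip time into an effective
// refresh rate: 1000 ms per second divided by ms per cycle.
function msPerCycleToHz(ms: number): number {
  return 1000 / ms;
}

console.log(msPerCycleToHz(45)); // benchmarked fetch case: ~22 Hz
console.log(msPerCycleToHz(1)); // benchmarked websocket case: ~1000 Hz
```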

I did some benchmark testing of the end-to-end time used for API calls with websocket and ajax (fetch); it takes longer than the text API demo, probably because the demo serves a static file while my benchmark API needs to do some calculation and JSON encoding/decoding.

Locally tested result (time between making the request and checking the response):
fetch: 30 - 45 ms/req
websocket: < 1 ms/req

Using fetch is slower than websocket, but it seems it can be good enough? With the added benefit of easier handling of unstable networks.

hpvd commented 2 years ago

glad to see you are looking into details further!

just two questions regarding speed:

1) this is quite a big difference in speed between websocket and fetch in your benchmark. -> Do you have any idea what the reason could be (besides websocket efficiency)? Is the data traveling the same way? Is maybe a proxy involved?

2) regarding the speed difference between the text demo and your benchmark: -> If it comes from

some calculation and json encoding/decoding

shouldn't this added time also be present to the same extent for websocket? Or could one conclude that if websocket is below 1 ms, the time for the calculation should also be well below 1 ms, since it is also part of the websocket cycle?

hpvd commented 2 years ago

regarding the question: what is fast enough?

we first have to answer the question: fast enough for what?

  1. for a multiplayer game? Then we should be end-to-end in the range of 120 Hz (8 ms), or
  2. for a "realtime" UI where users feel they are directly manipulating objects, 10 Hz (100 ms) seems to be enough

The response time guidelines for web-based applications are the same as for all other applications. These guidelines have been the same for 46 years now, so they are also not likely to change with whatever implementation technology comes next.

https://www.nngroup.com/articles/response-times-3-important-limits/
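The three classic limits from the linked article (0.1 s, 1 s, 10 s) can be sketched as a simple check; the category names below are my own shorthand, not from the article:

```typescript
type Feel = "instant" | "uninterrupted-flow" | "keeps-attention" | "too-slow";

// Classify a measured end-to-end response time against the three
// response-time limits from the nngroup article.
function classifyLatency(ms: number): Feel {
  if (ms <= 100) return "instant"; // feels like direct manipulation
  if (ms <= 1000) return "uninterrupted-flow"; // delay noticed, flow intact
  if (ms <= 10000) return "keeps-attention"; // needs feedback, e.g. a spinner
  return "too-slow"; // users will task-switch
}

console.log(classifyLatency(45)); // the benchmarked local fetch round-trip
```

By this measure, even the slower fetch numbers from the local benchmark would still land in the "instant" bucket for a "realtime" UI; the open question is how much real-world network distance adds on top.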

hpvd commented 2 years ago

when thinking about "fast enough for which use case?"...

If we agree with the benchmark done above that

then we have to make a decision on which speed route to go.

hpvd commented 2 years ago

added g) in https://github.com/beenotung/ts-liveview/issues/13#issue-1332245716

beenotung commented 2 years ago

The local test was done directly between the browser and the node.js server. No proxy was involved.

I think we can implement fetch-based transfer as an option, and make it easy to switch between different transports.

Right now we have the ws-native and ws-lite options; fetch-native can be another option, and we could probably make it the default to avoid an extra http connection.
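A pluggable-transport setup like the one proposed could be sketched as below. This is a hypothetical illustration only: the option names mirror those mentioned above, but the `Transport` interface, the factory, and the `/rpc` endpoint are invented and do not reflect the actual ts-liveview API:

```typescript
interface Transport {
  readonly name: string;
  send(message: string): void;
}

// Factory selecting the transport by option name; unknown options
// fall back to ws-native.
function createTransport(option: string): Transport {
  if (option === "fetch-native") {
    return {
      name: "fetch-native",
      // Upstream via fetch on the shared http/2 connection;
      // downstream would arrive via SSE instead of a websocket.
      send(message) {
        if (typeof fetch !== "undefined") {
          void fetch("/rpc", { method: "POST", body: message });
        }
      },
    };
  }
  return {
    name: option === "ws-lite" ? "ws-lite" : "ws-native",
    // Placeholder: a real implementation would wrap a WebSocket here.
    send() {},
  };
}
```

Keeping the transport behind one interface would let the framework switch (or fall back) between websocket and fetch/SSE without touching application code.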