Open hpvd opened 2 years ago
I know how to enable server push updates to the client with SSE, but I'm not sure how to send streaming updates from the client to the server.
I checked the examples in this article[1] again; it seems this is still not possible in Firefox (also tested with default settings in the latest stable version of Chrome).
[1] https://developer.chrome.com/articles/fetch-streaming-requests/
With http2's header compression, maybe it's fine to send multiple requests (instead of using a single request) and still get performance close to websocket over http1
so far so good :-)
hmm - is there nothing one could take from sources 7, 6 and 5 mentioned above?
I just did a benchmark[1] on http2 request-response and websocket. Tested locally with https/wss.
The observed speed:
request-response: 45 ms/cycle
websocket: 1 ms/cycle
[1] https://github.com/beenotung/ts-liveview/tree/benchmark/benchmark
Maybe this test does not align with the real-world access pattern, or maybe the bottleneck is located in a different place :thinking:
That's interesting! It would be really interesting to see the numbers in a full real-world example application.
just another deep link: 8) Consuming a fetch as a stream https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Using_readable_streams#consuming_a_fetch_as_a_stream
Regarding consuming a fetch response as a stream, it may be easier to use Server-Sent Events, with its native reconnect mechanism. On the other hand, we cannot send a fetch request as a stream yet (except multipart form data, but that still requires all blobs of the file to be ready when creating the form data).
When mainstream browsers support TextEncoderStream [ref], or make it possible to stream a ReadableStream as the request body, it may be a more lightweight alternative to websocket with similar performance.
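For illustration, here is a minimal sketch of the client-side encoding step with TextEncoderStream (runs in Node 18+ and Chromium today; the chunking and function name are just assumptions for the example, not an existing API):

```javascript
// Sketch: encode a sequence of text chunks into bytes via TextEncoderStream,
// as one would to feed a streaming request body (Chromium additionally
// requires `duplex: 'half'` on fetch for stream bodies).
async function encodeChunks(chunks) {
  const encoder = new TextEncoderStream();
  const writer = encoder.writable.getWriter();

  // Feed the chunks in asynchronously, then close the writable side.
  (async () => {
    for (const chunk of chunks) await writer.write(chunk);
    await writer.close();
  })();

  // Collect the encoded bytes from the readable side.
  const parts = [];
  const reader = encoder.readable.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    parts.push(...value);
  }
  return Uint8Array.from(parts);
}

encodeChunks(['hello, ', 'stream']).then((bytes) => {
  console.log(new TextDecoder().decode(bytes)); // "hello, stream"
});
```

In a browser that supports streaming request bodies, `encoder.readable` could be passed directly as the fetch body instead of being collected into one buffer.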
fetch examples consisting of live demos and their source code have been moved to: https://github.com/mdn/dom-examples/tree/master/fetch
Link to Demos: https://github.com/mdn/dom-examples/tree/master/fetch#fetch-examples
What do you think of this example? https://mdn.github.io/dom-examples/fetch/fetch-text/ Isn't this the direction we are looking for (to send requests up), in combination with using SSE to directly push messages down in other use cases? => It's a fast and tiny answer to a click, full http/2... please look at the screenshot from the web dev tools
Docs incl. browser compatibility chart: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
source: https://github.com/mdn/dom-examples/blob/master/fetch/fetch-text/index.html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
    <meta name="viewport" content="width=device-width" />
    <title>Fetch text example</title>
    <link rel="stylesheet" href="style.css" />
  </head>
  <body>
    <h1>Fetch text example</h1>
    <ul>
      <li><a data-page="page1">Page 1</a></li>
      <li><a data-page="page2">Page 2</a></li>
      <li><a data-page="page3">Page 3</a></li>
    </ul>
    <article></article>
    <script>
      // Output container and the page links.
      const myArticle = document.querySelector("article");
      const myLinks = document.querySelectorAll("ul a");

      // Fetch the matching page when a link is clicked.
      for (const link of myLinks) {
        link.onclick = (e) => {
          e.preventDefault();
          const linkData = e.target.getAttribute("data-page");
          getData(linkData);
        };
      }

      // Fetch `${pageId}.txt` and render its text into the article.
      function getData(pageId) {
        console.log(pageId);
        const myRequest = new Request(`${pageId}.txt`);
        fetch(myRequest)
          .then((response) => {
            if (!response.ok) {
              throw new Error(`HTTP error, status = ${response.status}`);
            }
            return response.text();
          })
          .then((text) => {
            myArticle.innerText = text;
          })
          .catch((error) => {
            myArticle.innerText = `Error: ${error.message}`;
          });
      }
    </script>
  </body>
</html>
browser compatibility:
added some more detail into comment above
Or am I constantly thinking in a completely wrong direction with this combination of fetch (to send requests up) and SSE (to push something directly down)? And this is not solving the need in any way?
when clicking around, typical real-world speed is
Distance to cover from browser to server is about 10,000 km (at least when looking at a GeoIP lookup of the server); the cache was of course disabled. (Can we be perfectly sure this is the full loop?)
Converting the end-to-end time used for each request into refresh rate (Hz) is an interesting perspective. It allows us to evaluate whether an approach is good enough, rather than the raw throughput.
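As a quick sanity check of that perspective, here is the conversion of the measured cycle times into refresh rates (a one-liner, just to make the arithmetic explicit):

```javascript
// Convert a per-request cycle time (ms) into an effective refresh rate (Hz).
const toHz = (msPerCycle) => 1000 / msPerCycle;

console.log(Math.round(toHz(45))); // 22   -> fetch at 45 ms/cycle ≈ 22 Hz
console.log(toHz(1));              // 1000 -> websocket at 1 ms/cycle = 1000 Hz
```

Both are below the ~100 ms threshold that users perceive as an instantaneous reaction, which is relevant for the "fast enough for what?" question below.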
I did some benchmarking of the end-to-end time used for API calls with websocket and ajax (fetch). It takes longer than the text API demo, probably because the demo serves a static file, while my benchmark API needs to do some calculation and JSON encoding/decoding.
Locally tested result:

| API approach | time between making request and checking response |
|---|---|
| fetch | 30 - 45 ms/req |
| websocket | <1 ms/req |
Using fetch is slower than websocket, but it seems it can be good enough? With the benefit of easier handling of unstable networks.
glad to see you are looking into details further!
just two questions regarding speed:
1) this is quite a big difference in speed in your benchmark between websocket and fetch. -> Do you have any idea what the reason could be (besides websocket efficiency)? Is data traveling the same way? Is maybe a proxy involved?
2) regarding the speed difference between the text demo and your benchmark: -> If it's from
some calculation and json encoding/decoding
shouldn't this added time also be present to the same extent for websocket? Or could one conclude that if websocket is below 1 ms, the time for the calculation should also be well below 1 ms, since it is also part of the websocket cycle?
regarding the question: what is fast enough?
we have to first answer the question: fast enough for what?
The response-time guidelines for web-based applications are the same as for all other applications. These guidelines have been the same for 46 years now, so they are also not likely to change with whatever implementation technology comes next.
https://www.nngroup.com/articles/response-times-3-important-limits/
when thinking about "fast enough for which use case?"...
If we agree with the done benchmark that
and with this we have to decide which speed route to go
The local test was done directly between browser and node.js server. The proxy was not involved.
I think we can implement fetch-based transfer as an option, and make it easy to switch between different transports.
Right now we have the ws-native and ws-lite options; fetch-native can be another option, and probably the default one, to avoid an extra http connection
use http/2 features instead of websocket
The idea was developed and discussed during an attempt to generally use http/2 instead of http/1 https://github.com/beenotung/ts-liveview/issues/4
Possible reasons to use http/2 solely, instead of http/1-or-2 plus websocket:
a) easier to setup/install/deploy
b) not having to take care of security and abuse for 2 different stacks (see this for additional security fields of WS: https://brightsec.com/blog/websocket-security-top-vulnerabilities/ )
c) not having to take care of the performance of two stacks
d) easier for features like auto reconnect (network break or change)
e) easier to debug
f) the content delivery strategy does not have to be split across two types of connections (e.g. when using things like preload, see https://github.com/beenotung/ts-liveview/issues/14 , or lazy loading)
g) one additional error source removed: ws could not be established, learned in https://github.com/beenotung/ts-liveview/issues/15
h) ...
Just aggregated the main information/sources (from https://github.com/beenotung/ts-liveview/issues/4):
1) http/2 seems to support full bidi streaming, so there seems to be:
as far as I get it, one needs to use:
Details:
...
this and more details in: https://stackoverflow.com/a/42465368
edit: if this can really be confirmed and it works like this, I was exactly right yesterday with my casual comment "could it be matched by other protocols and advanced configurations" :-) -> at first, the client has to open the stream, similar to how the client has to initiate the WS upgrade... (so you could/should even keep the WS indicator badge on your demo, it only has to be renamed to:
stream open
)
2) some more background on http/2 bidi streaming: https://web.dev/performance-http2/#streams-messages-and-frames
3) A Complete Guide to HTTP/2 in Node.js (With Example Code) https://www.sohamkamani.com/nodejs/http2/
4) there should be a way, since the fetch api for sending requests up has 93% coverage on can-I-use https://caniuse.com/mdn-api_fetch and there is also a polyfill if one really!? wants/needs more browser support (e.g. IE10): https://github.com/github/fetch
5) web docs: using fetch API https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
6) browser compatibility chart for using fetch API https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#browser_compatibility
7) many fetch examples: https://github.com/mdn/fetch-examples/
8) ...