There are two different approaches to SSR: renderToString (render the page to a string, then send the string) and renderToStream (stream the content of the page instead of waiting for it). The former is what Nuxt (and vue-server-renderer) does out of the box; the latter is what you are aiming at. Unfortunately, streaming has some significant downsides.
In stream rendering mode, data is emitted as soon as possible when the renderer traverses the Virtual DOM tree. This means we can get an earlier "first chunk" and start sending it to the client faster.
However, when the first data chunk is emitted, the child components may not even be instantiated yet, nor will their lifecycle hooks have been called. This means that if the child components need to attach data to the render context in their lifecycle hooks, this data will not be available when the stream starts. Since a lot of the context information (like head information or inlined critical CSS) needs to appear before the application markup, we essentially have to wait until the stream is complete before we can start making use of this context data.
It is therefore NOT recommended to use streaming mode if you rely on context data populated by component lifecycle hooks.
Further reading: https://ssr.vuejs.org/guide/streaming.html#streaming-caveats
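For concreteness, here is a minimal sketch of the two vue-server-renderer entry points being contrasted (Vue 2 API; the Express setup is only illustrative):

```js
const express = require('express')
const Vue = require('vue')
const { createRenderer } = require('vue-server-renderer')

const server = express()
const renderer = createRenderer()

// renderToString: nothing is sent until the whole page has been rendered.
server.get('/string', (req, res) => {
  const app = new Vue({ template: '<div>hello</div>' })
  renderer.renderToString(app, (err, html) => {
    if (err) return res.status(500).end('render error')
    res.end(html)
  })
})

// renderToStream: chunks are flushed as the virtual DOM tree is traversed,
// so the first bytes leave before child components have attached context data.
server.get('/stream', (req, res) => {
  const app = new Vue({ template: '<div>hello</div>' })
  renderer.renderToStream(app).pipe(res)
})

server.listen(3000)
```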
If the intent is just to reduce the TTFB, perhaps the suggested approach of sending some generally common content (perhaps just the opening of the html tag, as suggested above) might get around the downsides described above?
E.g. just send out a '<' character in the stream, let the rendering engine do its thing with all the right context information available, then send the rest of the content.
Exactly. @manniL imagine a third option:

- renderToString: render the page to a string and then send the string.
- renderToStream: stream the content of the page instead of waiting.
- sendFirstByteThenRenderToString: send a single byte, then render the page to a string and send the string.

Is it possible? This would solve the problem of TTFB destroying the Lighthouse score under SSR.
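A hypothetical sketch of that third option (sendFirstByteThenRenderToString is not an existing API; renderer and app are assumed to be set up as in the earlier sketch):

```js
// Hypothetical: flush headers plus a single byte immediately, then render
// the page to a string as usual and send it as one chunk.
function sendFirstByteThenRenderToString (renderer, app, res) {
  res.write(' ') // the first byte leaves now; headers are flushed along with it
  renderer.renderToString(app, (err, html) => {
    if (err) return res.end()
    res.end(html) // the fully rendered page follows once it is ready
  })
}
```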
There are other downsides to streaming. For example, Nuxt wouldn't be able to send the ETag header, as it needs to be calculated from the full response, and once the response has started you can't send headers anymore.
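To illustrate the constraint, a sketch (not Nuxt's actual implementation): the ETag is a hash of the complete body, so it can only be set while no body bytes have gone out.

```js
const crypto = require('crypto')

function sendWithEtag (res, html) {
  // The hash requires the full response body...
  const etag = crypto.createHash('md5').update(html).digest('hex')
  // ...and headers can only be set before the body starts being written.
  res.setHeader('ETag', `"${etag}"`)
  res.end(html)
}
```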
Could beginning to send the headers count as the first byte then, as opposed to sending a body byte?
E.g. send the Server header or anything that is immutable, assuming that counts as the first byte?
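Node does allow pushing the status line and headers without writing any body bytes, so this is at least mechanically possible. A sketch (whether measuring tools count this as the first byte is the open question):

```js
const http = require('http')

http.createServer((req, res) => {
  res.setHeader('Server', 'nuxt')
  // flushHeaders() sends the status line and headers now, with no body byte.
  res.flushHeaders()

  // Simulate the render finishing later.
  setTimeout(() => {
    res.end('<!DOCTYPE html><html><body>rendered</body></html>')
  }, 2000)
}).listen(3000)
```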
I'm wondering what the benefit would be (besides "artificially" improving the TTFB value). That behavior wouldn't have a positive influence on the user's perception nor the loading time itself, right?
In this case, quoting Wikipedia (https://en.wikipedia.org/wiki/Time_to_first_byte):
TTFB is often used by web search engines like Google and Yahoo to improve search rankings since a website will respond to the request faster and be usable before other websites would be able to. There are downsides to this metric since a web-server can send only the first part of the header before the content is even ready to send to reduce their TTFB. While this may seem deceptive it can be used to inform the user that the webserver is in fact active and will respond with content shortly. There are several reasons why this deception is useful, including that it causes a persistent connection to be created, which results in fewer retry attempts from a browser or user since it has already received a connection and is now preparing for the content download.
I don't believe this would be artificially improving the TTFB. Receiving the First Byte means one thing to the client: there are life-signs on the other side, and we have a connection. I would actually posit the opposite: Nuxt SSR currently has an artificially high TTFB since it doesn't send anything until everything is ready, making the client think the server has more latency than it really has.
As to TTFB as a metric, it is a measurement of "how quickly did this server begin to send its response" – otherwise it wouldn't be measuring for the first byte. There are other metrics that try to measure other things that are more visually relevant to the user (First Contentful Paint, Time to Interactive).
If Google wanted to measure TTPH (Time to Page Head), it could know how quickly pages sent the head tag. That could be another interesting measurement, since once the head tag is loaded, a bunch of other requests can be made by the browser. But that is another matter altogether.
I think there is no downside to beginning the response as quickly as possible under SSR. It tells the client they have connected, instead of making them think the server is unresponsive. It also removes this massive penalty that Google gives servers with slow response times in order to incentivize engineers to improve their infrastructure. I also believe everyone would enable it.
I think this is kind of cheating. The whole purpose of Lighthouse (and Web Vitals) is generally to ensure a good UX. The scores aren't just numbers; they are a representation of your page's UX!
Lighthouse takes specific measurements, and uses them as part of several scores, which they use as proxies for user experience.
The specific measurement we are talking about, "Time to First Byte", is a measure of server response time. It is up to Lighthouse to weigh these measurements. By basing performance scores on this measurement, Lighthouse (and Google) are essentially telling developers: make your servers respond faster.
Proposal: rename time-to-first-byte to server-response-time https://github.com/GoogleChrome/lighthouse/issues/10720
In fact, Lighthouse has decided to rename TTFB to Server Response Time. Having SSR not respond to the client until the entire page is generated is not some sort of ethical design decision. It is actually misleading Lighthouse into thinking the server has a latency problem. This in turn strongly penalizes Nuxt applications in search rankings, because Google thinks they are running on high latency servers.
Lighthouse has tons of metrics to know when milestones past the initial server response have been reached. There is no "cheating".
In terms of UX: clients will know they have connected to the other side earlier, allowing them to display feedback in terms of connecting to the app.
Making Nuxt responsive as soon as possible is the equivalent of yelling "I'm coming" when someone knocks at the door.
@ebrawer I liked this metaphor. Thanks for your explanation. I was only considering the literal wording of TTFB when I shared my opinion, but you've explained what actually matters to Lighthouse. It makes sense now!
Have you checked that there is actually a visible difference to the user in those two cases (starting sending headers early vs waiting until page is ready)?
@rchl This is client specific (i.e. browser specific). A few common ones:
- The status text "Waiting for {website_url}..." until the server has responded.
- The indicator changing from {website_domain} to other progress states once the server has responded.

From a subjective point of view, once those pre-connection indicators have displayed for more than a split second, I become anxious that the website is down or that my connection is down.
I do wonder if the browser actually changes the progress indication after receiving partial headers. It could be that it only does that after receiving all headers. That's why I'm asking if you have specifically tested this case.
@rchl Ah, got it. I haven't been able to test specifically sending a first header, but I'll note here what I found in the Chromium source code documentation (so this is presumably the behaviour for Chrome, and most probably Safari):
https://www.chromium.org/developers/design-documents/webnavigation-api-internals
WebContentsObserver::DidStartProvisionalLoadForFrame: At this point, the URL load is about to start, but might never commit (invalid URL, download, etc.). Only when the subsequently triggered resource load actually succeeds and results in a navigation will we know what URL is going to be displayed.
WebContentsObserver::DidCommitProvisionalLoadForFrame: At this point, the navigation was committed, i.e. we received the first headers, and a history entry was created.
This covers how RenderViewHost works. According to it, navigation transitions to a "committed" state once the first headers are received.
But anyhow I agree it would be good to test this empirically.
@rchl I stood up a Node Express server with just a write call on the resource (no end call). When connecting, a few headers are sent out to the client, including Transfer-Encoding: chunked.
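Something along these lines (a reconstruction of the test described, not the exact code used):

```js
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  // The first write() flushes the status line and headers to the client;
  // with no Content-Length set, Node uses Transfer-Encoding: chunked.
  res.write(' ')
  // Deliberately no res.end(): the response is left open, as in the test above.
})

app.listen(3000)
```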
Visual results (Chrome):
The browser doesn't display anything (normal since there is no body yet), but it no longer shows signs of attempting to connect (since it has connected). Practically, this means that the "Waiting for {website_url}..." goes away the moment the first headers have arrived.
@ebrawer I really like the idea that you described. Over the past year of performance optimization on Nuxt-based projects, I must say that TTFB was the biggest pain point when it comes to measuring tools' scores, such as GTmetrix or Lighthouse. I was able to optimize pretty much everything except the high TTFB. The only things that helped somewhat were component caching and loading more of the page's content lazily, but the latter in turn hurt SEO, since it means less of the page's content is visible to a crawler, and it was really tough for us to negotiate that with our SEO agency, which demanded much more content on pages.

And the effect TTFB has on Lighthouse is just tremendous. I'm talking about 30-40% performance degradation just because of a bad TTFB, and this is not only a UX suggestion as some pointed out here; it's a real metric that affects Google rankings from the day Google introduced Web Vitals.

@manniL I guess the main question here is how architecturally challenging this is to implement from a coding perspective, and what real downsides it may introduce to the framework?
@ebrawer One of my concerns is that just improving the TTFB may not help at all! In Web Vitals we have LCP, which measures the loading speed of the page. If we send the headers as soon as possible but the content is still delayed, LCP hasn't changed at all. It's just an improvement in TTFB, not in LCP! Am I right?
Improving TTFB won't affect LCP. These are two independent measurements that contribute to the Lighthouse score. It will improve the TTFB measurement and therefore the overall Lighthouse score, but it won't improve the LCP measurement.
@ms-fadaei LCP is a completely different metric, and it is usually defined by how your content is rendered. One example of bad LCP: you have a full-screen, non-optimized image or video that loads lazily after all other content. You get a blank screen, and the image loads only once all CSS and scripts have executed, so LCP will be very bad in this case. But that doesn't mean all other metrics depend on this one. In short, it should be handled separately and doesn't necessarily relate to TTFB.
@ebrawer @AndrewBogdanovTSS refer to this:
The longer it takes a browser to receive content from the server, the longer it takes to render anything on the screen. A faster server response time directly improves every single page-load metric, including LCP. Before anything else, improve how and where your server handles your content. Use Time to First Byte (TTFB) to measure your server response times.
So I think the described strategy can't improve LCP; it just improves TTFB itself (and nothing related to it, like FCP and LCP). In Lighthouse, TTFB does not affect the score independently, but improving it can help metrics like LCP and FCP.
I like the idea of decreasing the TTFB, but I'm looking for a way that really improves the performance, not just TTFB itself!
Well, improving performance is a very complicated topic. This ticket was created specifically for TTFB improvements; I don't see any reason to extend its scope to a discussion of other metrics.
I experience the exact same problems with TTFB and really like the idea of "Received Request, sending acknowledgment". The metaphor of the knocking door fits perfectly.
As an idea for the first byte: could one simply send an empty HTML comment as the first line?
<!---->
This would have no influence on the following render process whatsoever (as opposed to sending a single '<'), so it should be safe to send without any regard to the rest of the content rendering process.
https://validator.w3.org/ accepts this as valid HTML:
```html
<!---->
<!DOCTYPE html>
<html lang="de">
<head>
<title>test</title>
</head>
<body>
</body>
</html>
```
This would be a little more data than only one byte, but I think the big advantage here is the absence of any side effects.
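A sketch of how that could look in a server handler (hypothetical; renderPage stands in for whatever produces the full HTML string, e.g. renderToString in a real Nuxt server):

```js
const express = require('express')
const app = express()

// Placeholder for the real server-side render step.
async function renderPage (url) {
  await new Promise(resolve => setTimeout(resolve, 2000)) // simulated render cost
  return '<!DOCTYPE html><html lang="de"><head><title>test</title></head><body></body></html>'
}

app.get('*', async (req, res) => {
  res.setHeader('Content-Type', 'text/html; charset=utf-8')
  // The empty comment is valid before the doctype, so flushing it early
  // cannot interfere with the markup rendered afterwards.
  res.write('<!---->\n')
  res.end(await renderPage(req.url))
})

app.listen(3000)
```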
Has anyone managed to stream data from Nuxt? This is crucial for long pages that load large amounts of data that can't be easily cached.
There are some great ideas here, but let's coalesce this into https://github.com/nuxt/nuxt/issues/4753.
Problem
When using SSR, especially with API calls that are fetched on server, Time to First Byte (TTFB) is quite high due to nothing being sent until the entire page is rendered. TTFB is very important for LightHouse scores, which are used by Google to determine page speed and therefore search rankings.
The irony here is that SSR was developed to improve SEO, yet because of the TTFB issue it hurts it in practice.
Solution
Nuxt should send anything it can, as quickly as possible. Most of the page is dynamic, meaning it can't be sent until further server-side rendering has occurred – but surely even a single byte can be sent? It is "Time to First Byte" after all. Perhaps it would be safe enough to send <!doctype html>? Or even send a single space if possible.
Similar Issues
#4753 mentions this issue, but focuses on prefetching. My understanding is that this would help with links, but not direct access to the Nuxt site by URL or by GoogleBot.