jeremyroman opened 2 years ago
Hello. We have an extension to the speculation rules syntax to allow the referrer policy of a speculative request to be set explicitly. A key use case for this is to allow a site with a lax referrer policy to adopt cross-site prefetching by using a strict policy specifically for the prefetch.
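A sketch of what this could look like, using the "referrer_policy" key described in the explainer (the URL and policy value here are illustrative):

```html
<script type="speculationrules">
{
  "prefetch": [{
    "urls": ["https://cross-site.example/article.html"],
    "referrer_policy": "no-referrer"
  }]
}
</script>
```

This lets a document whose own policy is lax issue the prefetch with a strict policy, as described above.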
Explainer: https://github.com/WICG/nav-speculation/blob/main/triggers.md#explicit-referrer-policy Spec: https://wicg.github.io/nav-speculation/speculation-rules.html
(Note that as of this writing, the most recent version of the spec hasn't yet been published at that link, but should be available soon.)
Please take a look.
Just dropping a note here to mention that Chrome is planning an experiment with an expanded subset of this feature soon, notably including document rules, an HTTP response header as an alternative to an inline <script>, and integration with a proposed PerformanceResourceTiming.deliveryType to enable authors to determine whether the navigation was served from the prefetch cache.
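The response-header form mentioned above points at an external rules resource instead of an inline script; a hedged sketch (the path is illustrative):

```
Speculation-Rules: "/speculation-rules.json"
```

The referenced resource would then contain the same JSON that would otherwise appear inline.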
Thanks!
Hi @jeremyroman, I have a question - I was trying to collect usage from Chrome and some sample URLs. I do see the percentage of page loads over time at https://chromestatus.com/metrics/feature/timeline/popularity/3932 showing over 4%, but the "Adoption of the feature on top sites" section is empty, and the sample query at the bottom returns empty results when I run it in BigQuery:
#standardSQL
SELECT yyyymmdd, client, pct_urls, sample_urls
FROM `httparchive.blink_features.usage`
WHERE feature = 'SpeculationRules'
ORDER BY yyyymmdd DESC, client
Do you know what's going on with this data?
Hi @bgrins, HTTP Archive relies on a crawl that takes a while to update and doesn't cover all the cases that we see with Chrome in the wild. In particular, I doubt the HTTP Archive includes any results from search engines, which is one of the primary use cases for speculation rules. That said, it does look like there are a few URLs listed now:
Google.com is also definitely using this feature and is perhaps the most interesting case for you to look at. Jeremy probably missed the GitHub notification for your comment, but feel free to reach out to us (e.g. rbyers@chromium.org / jbroman@chromium.org) if there's anything we can do to help with testing and evaluation.
Apologies for the slow response and thanks @RByers for giving a quick summary. We tried to internally look into what exactly is and isn't covered by the archive data that feeds this, and weren't sure whether the Google mobile SERP (which as Rick mentions is one of the largest users in the wild) was intentionally not indexed, was missed due to some quirk of the data collection methodology (for example, the User-Agent string), or something else.
We didn't find a great answer to that before the holidays (and then I subsequently fell ill) but Rick's answer covers the overall conclusion.
Hi. We have delta updates on how speculation rules should interact with Content Security Policy.
Explainer: https://github.com/WICG/nav-speculation/blob/main/triggers.md#content-security-policy
We added a Content Security Policy section to the explainer to clarify how speculation rules interact with existing Content Security Policy, and to explain the new source keyword "inline-speculation-rules".
We also added a Content Security Policy section to the speculation rules spec, in order to explain the motivation and to show spec patches for Content Security Policy.
Spec (diff): https://storage.googleapis.com/spec-previews/WICG/nav-speculation/pull/245/diff/speculation-rules.html
In short, we clarify how the speculation rules are handled in CSP, and provide a new source keyword to permit safe inline speculation rules without allowing unsafe inline script under the strict CSP environment. Here is an example use.
<meta http-equiv="Content-Security-Policy" content="script-src 'inline-speculation-rules'">
<!-- this just works!! -->
<script type="speculationrules">
...
</script>
<!-- this causes a CSP violation -->
<script>
console.log('hello.');
</script>
Hello. We have an extension to the speculation rules syntax to explicitly set a No-Vary-Search hint on a speculative request. The hint is useful because prefetches that depend on the No-Vary-Search header to match navigations do not benefit the user if the navigation happens before the prefetch's response headers return from the server. Using the hint, the web browser will wait for a matching in-flight prefetch, expecting (but verifying) that the No-Vary-Search hint matches the No-Vary-Search header. If the hint does not match the header received, the web browser will send a new request to the server.
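As a sketch of the hint in rule syntax (the "expects_no_vary_search" key is from the explainer; the URL and parameter name are illustrative):

```html
<script type="speculationrules">
{
  "prefetch": [{
    "urls": ["/search?q=shoes&utm_source=app"],
    "expects_no_vary_search": "params=(\"utm_source\")"
  }]
}
</script>
```

Here the hint tells the browser that a navigation to /search?q=shoes can be expected to match this prefetch, pending verification against the actual No-Vary-Search response header.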
Explainer: https://github.com/WICG/nav-speculation/blob/main/triggers.md#no-vary-search-hint Spec: https://wicg.github.io/nav-speculation/speculation-rules.html No-Vary-Search header request for position: https://github.com/mozilla/standards-positions/issues/717
(Note that as of this writing, the most recent version of the spec hasn't yet been published at that link, but should be available soon.) Please take a look.
Hi, we're expanding the syntax for speculation rules to allow developers to specify the target_hint field. This field provides a hint indicating the target navigable where a prerendered page will eventually be activated. For example, when _blank is specified as a hint, a prerendered page can be activated for a navigable opened by window.open(). The field has no effect on prefetching.
<script type=speculationrules>
{
"prerender": [{
"target_hint": "_blank",
"urls": ["page.html"]
}]
}
</script>
<a target="_blank" href="page.html">click me</a>
Please see the explainer for the motivation of this extension.
Explainer: https://github.com/WICG/nav-speculation/blob/main/triggers.md#window-name-targeting-hints Spec: https://wicg.github.io/nav-speculation/speculation-rules.html Chrome platform status: https://chromestatus.com/feature/5162540351094784
We think the use case is good and the user experience when a prerendered page is navigated to seems compelling.
We struggle a lot with the JSON syntax and the introduction of a new query language. We have Selectors, and it seems possible to add URL patterns to Selectors.
Furthermore, the rebuttals for <link> in the explainer can be solved without jumping to a JSON syntax. The explainer even says that new rel values are possible. Since "prefetch" was changed to a subresource prefetch, the terminology is now confusing even with speculation rules vs. link rel. Why not use nav-prerender and nav-prefetch as new rel values? And make them work on <a> also, to enable prerendering or prefetching a specific link, where all of the existing attributes can apply (like target, referrerpolicy, etc.)? Listing multiple URLs could be a new space-separated or comma-separated attribute (like srcset).
Example:
<link rel="nav-prerender" selector=":link-href('/*'):not(:link-href('/logout?*'), .no-prefetch *)">
<link rel="nav-prefetch" hrefset="next.html, next2.html" requires="anonymous-client-ip-when-cross-origin" referrerpolicy="no-referrer">
...
<a href="other.html" target="_blank" rel="nav-prerender">other</a>
The motivation for external speculation rules is "it would be convenient", but that could apply to any HTML metadata? Is it necessary to support external rules?
(Filed https://github.com/WICG/nav-speculation/issues/307 for the above.)
The cost of mispredicting a prerender (i.e. fetching, parsing, and rendering without using) looks to be significant in terms of CPU time and network bandwidth. Is there a story for minimizing wasted prerenders, if the feature is widely adopted in the future?
In addition to negative performance impact from wasted prerenders, another downside to prerendering is that it increases complexity to the entire platform by adding a special mode (DelayWhilePrerendering) that can affect the behavior of every other API for both implementers and authors. This is fundamental to the design of having prerender actually load the page.
Prefetch with subresources sidesteps this complexity entirely, so it's a pretty compelling option from that standpoint. I would like to know more about this comment, and your experience with NoState prefetch more generally https://github.com/WICG/nav-speculation/blob/main/prerendering-same-site.md#prefetching-with-subresources:
Based on some initial performance testing, Chromium found that prefetching with subresources was a bad middle ground for our users: it would result in significantly more resource consumption, but only slightly faster loads, compared to main-document prefetching.
How much more resource consumption, and how much faster were the loads compared with (a) normal prefetch and (b) no speculation at all? What are the same numbers with the new prerendering feature? And what specifically is causing the performance difference between NoState and prerendering - is it mostly executing the page load itself, follow-on requests within the new document that aren't speculated, something else?
Thanks @zcorpan and @bgrins for chiming in! It seems like there are a few issues here.
First, we want to reemphasize that there are at least three separate efforts here, and a clear signal from you would be valuable on all of them:
Navigational prefetch. This could be triggered by the browser (e.g. typing in the URL bar, hovering over bookmarks, etc.). Or it could be triggered by the web developer, using technologies like speculation rules or your new <link> proposal.
It sounds like there aren't significant concerns with this technology in itself? (Setting aside specific trigger mechanisms.) If so, that would be valuable for us to confirm, since with Mozilla's support we could start upstreaming https://wicg.github.io/nav-speculation/prefetch.html into HTML. Note that even if the trigger is browser UI, we still need clear specifications for how this works, e.g. what special HTTP headers are sent with prefetching, or how it impacts navigation timing APIs.
Navigational prerendering. Again, this could be triggered in multiple ways.
It sounds like there is some support here in terms of the impact on user experience, but there are concerns about efficiency and implementation complexity. We'll discuss more below.
Again, if this is something Mozilla supports upstreaming into HTML apart from the specific triggering mechanism, that'd be really great to know.
Speculation rules JSON syntax. It sounds like you have definite concerns here. Let's discuss at https://github.com/WICG/nav-speculation/issues/307.
The cost of mispredicting a prerender (i.e. fetching, parsing, and rendering without using) looks to be significant in terms of CPU time and network bandwidth. Is there a story for minimizing wasted prerenders, if the feature is widely adopted in the future?
Regarding the performance impact of prerendering, it's important to keep in mind that a typical web page loads many other web pages that are never interacted with: e.g., ads in iframes. Because of how prerendering delays the loading of cross-origin iframes in a prerendered page, and because of how layout and painting can (if the browser wishes) be delayed until activation, we find that the typical cost to a user's CPU and bandwidth of a prerendered page is about the same order as a single iframe. So although we certainly want to be cautious, it's good to keep the scale of the problem in mind.
Prefetch and prerendering's use of the HTTP cache also helps reduce waste. Typically, for same-origin prerenders, many assets are reused (site-wide CSS, JS, logos, etc.), so the additional cost comes from the document itself (usually quite small) and media (often lazily loaded). Even when a speculation is not used, it can still prime the cache for future use, so it is often not completely wasted (for example, if product A is prerendered but product B is then navigated to instead).
One reason we're excited about prerendering as a first-class technology is because it is under the control of the user agent. Without prerendering in the browser, if a page author wants an instant experience, they need to "prerender" by converting their application to a single-page app, and then manually rendering the next page's content offscreen and doing a DOM swap. This is very complicated, so you only see it on highly-resourced SPAs. But also it's opaque to the user agent.
With MPA-based navigational preloading, the browser gets to be in complete control of prerender eligibility. For example, Chromium prevents prerendering when the user is in Battery Saver mode, Data Saver mode, under memory pressure, or just chooses to disable preloading through their settings pages. There are also automatic limits on how many prerenders can be ongoing. The user agent is also well positioned to implement more sophisticated triggering heuristics that can improve precision (which would minimize wasted resources) while still having recall/lead times that make preloading viable and useful. And finally, prerendering processes can be intentionally down-prioritized in scheduling algorithms. When pages are doing "prerendering" themselves, they are often not so conscientious about the user's resources.
Similar reasoning holds for navigational prefetching, by the way: compared to websites manually using fetch() or <link rel="prefetch"> to prime the HTTP cache, navigational prefetching can be more explicitly optional, since we know the web developer's intent.
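To make the contrast concrete, a hedged sketch of the two approaches (URLs illustrative):

```html
<!-- Manual cache priming: the browser fetches this more or less unconditionally -->
<link rel="prefetch" href="/next.html">

<!-- Navigational prefetch: a declaration of intent that the user agent is free
     to ignore, e.g. under Data Saver or memory pressure -->
<script type="speculationrules">
{
  "prefetch": [{ "urls": ["/next.html"] }]
}
</script>
```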
This is also a reason we've so far held the line against exposing the status of an ongoing prerender. We want to make it relatively hard for pages to take a dependency on a prerender: it should always be possible for the user agent to deny a prerender request, or evict a prerendered navigable. (That said, we've gotten repeated requests to expose such state from web developers; see e.g. https://github.com/WICG/nav-speculation/issues/306 and https://github.com/WICG/nav-speculation/issues/162. We're hoping that most of the motivation for such requests will be solved by automatically falling back from prerender to prefetch.)
So yes, we think there's a pretty good story for minimizing wasted prerenders :).
In addition to negative performance impact from wasted prerenders, another downside to prerendering is that it increases complexity to the entire platform by adding a special mode (DelayWhilePrerendering) that can affect the behavior of every other API for both implementers and authors. This is fundamental to the design of having prerender actually load the page.
Yes, the implementation complexity of prerender is substantial. We've found the developer excitement and movement of metrics it generates to be worthwhile, but we understand the reluctance. We're happy to talk more through the challenges we've faced as part of this if that'd be helpful. But in the meantime we're hopeful that in our role as the first implementer, by writing an exhaustive spec and set of tests, we can ease the burden on second-onward implementations.
How much more resource consumption, and how much faster were the loads compared with (a) normal prefetch and (b) no speculation at all? What are the same numbers with the new prerendering feature?
Figuring out exactly which of this data I can share is tricky. But let me quote some of our public numbers, as well as a bit of new information I was able to find about the NSP-vs-prefetch live experiment.
Comparing NSP vs. prefetch for a journey from the Google Search results page to a search result, we found that NSP tripled the byte cost for a +14% relative improvement in LCP impact. So instead of implementing NSP for the top single result, they chose to implement prefetch for the top two results, which gave a better hit rate. This is what made us stop pursuing prefetch-with-subresources, as it didn't give instant page loads like prerender, but we had evidence from at least one large data set that a simple prefetch was a better tradeoff.
For other customers working with prerender, like Google Search itself (e.g. google.com -> search results page), the resource usage of prerender was found to be negligible compared to prefetch. Part of this is as discussed above, comparing a prerender to an iframe: once you're already paying the server-side costs of generating the main response body, running the JS during idle time and fetching cacheable subresources is not a big deal.
Looking at our TPAC 2023 presentation, you'll see that for pages using speculation rules on Android, prefetched pages had 0.9x LCP, and prerendered pages had 0.72x LCP. Speculation rules usage has increased by ~10x since then, so these numbers may no longer be accurate, but hopefully they are helpful.
For one large external partner deployment that has happened since then, prefetched pages had 0.7x LCP, and prerendered pages had 0.3x (down to 216 ms on desktop / 265 ms on Android at P50).
(If you have more specific questions that could be answered with data, I can try to see if there is more we could dig up and get approval for sharing. However, the process is somewhat heavyweight, so I'd appreciate it if we saved that for cases that would truly make a difference in Mozilla's priorities.)
Note however that, separate from any concerns about resource usage, we are seeing some requests from partners to bring back prefetch-with-subresources: https://github.com/WICG/nav-speculation/issues/305. The basic summary is that navigational prefetch is not exciting enough: the core capability is a more user-respectful version of <link rel=prefetch>, which also works even on pages that are not HTTP-cacheable. Prerender, on the other hand, is sometimes hard to adopt in the presence of many third-party resources, e.g. analytics scripts that are not yet prerender-aware. So we might be exploring prefetch-with-subresources over the coming quarter as well.
And what specifically is causing the performance difference between NoState and prerendering - is it mostly executing the page load itself, follow-on requests within the new document that aren't speculated, something else?
It depends on the site. For simple static sites, the difference is mostly in things like setting up the new process, creating all the global JavaScript objects, and initial layout and paint. For complex SPAs, it's a lot of JavaScript processing and subresource fetching, and especially the extra server round trips to get the main content. You can kind of answer this question for any given site by taking a trace for that site and removing all the statically-discoverable resource fetching time. Or, at least for some sites with good caching headers, you can note that an NSP prefetch-powered load is basically the same as a normal warm-HTTP-cache load, which is usually not the same sort of instant (<200 ms) experience you get with prerender.
Request for Mozilla Position on an Emerging Web Specification
Other information
https://github.com/mozilla/standards-positions/issues/613#issuecomment-1235040904 :