Closed by 2aces 1 year ago
absolutely! :-)
but does this work when using page cache plugin and when AO is set to deliver static files? because in that case you might not have any PHP to start with?
All tests were done on WPengine (which uses a custom varnish implementation, I think) with and without Cloudflare. I guess their cache keeps the headers sent by the original PHP page in the static version.
I will check on other hosts and let you know the results ASAP.
So I read up on the topic a bit :-) and found this interesting article on Smashing Magazine about preloading, which focuses primarily on HTML-based preloading (but also mentions the HTTP-header approach).
I think adding the preload both as an HTTP response header and in the HTML (link rel=preload) would ensure that even in a fully static setup (where headers are not cached) the resources would be loaded over the existing HTTP/2 connection (@daveros does the same, actually)?
So far, we use a custom plugin for each site, aggregating all functions, AO filters and optimization techniques, including different resource hints ( https://www.w3.org/TR/resource-hints/ ) for different resources to ensure maximum browser support, depending on each resource's importance, origin, position and how sure we are that we will use it:
<link rel="preconnect" href="https://api.tiles.mapbox.com">
<link rel="dns-prefetch" href="https://api.tiles.mapbox.com">
<link rel="preload" as="script" crossorigin href="https://api.mapbox.com/mapbox.js/plugins/leaflet-omnivore/v0.2.0/leaflet-omnivore.min.js">
<link rel="preconnect" href="https://api.mapbox.com/">
<link rel="dns-prefetch" href="https://api.mapbox.com/">
<link rel="preload" as="style" href="https://mydomain.com/path-to-ao-cache/ao-aggregated-style.css">
Line by line:

1. preconnect (line 1): suggests to browsers which support it to initiate the connection.
2. dns-prefetch (line 2): browsers ignore this one if the connection on line 1 was made (all resource hints are suggestions... er, hints... and the UA decides if it will process them).
3. preload (line 3): preloads the script in browsers that support it. Note that we need both the crossorigin and as parameters if said resource is external.
4. preconnect and dns-prefetch (lines 4 and 5): same way as lines 1 and 2.
5. preload (line 6): preloads the aggregated stylesheet.

Since last week, we've been shipping our plugins with our send-header functions using just preload for the most important resources. That's what "triggered" our suggestion, because we thought it would be great for other AO users. :-)
I guess AO would work great with just preload in headers and inline, but if you want wider browser support, use preconnect and dns-prefetch as well. This should be filterable/optional because it will work great most of the time, but sometimes it won't, depending on the aggregated file size, original CSS rules, DOM complexity, etc.
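As a sketch of emitting those same hints as an HTTP header too (per the Resource Hints and preload drafts, hints can also be expressed as Link headers; the helper name and array shape below are hypothetical):

```php
<?php
// Build a Link header value from a list of resource hints, so the same
// preconnect/dns-prefetch/preload hints shown as HTML tags above can
// also be sent as an HTTP response header.
function build_hints_header(array $hints) {
    $parts = array();
    foreach ($hints as $hint) {
        $part = sprintf('<%s>; rel=%s', $hint['href'], $hint['rel']);
        if (isset($hint['as'])) {
            $part .= '; as=' . $hint['as'];
        }
        if (!empty($hint['crossorigin'])) {
            $part .= '; crossorigin';
        }
        $parts[] = $part;
    }
    return implode(',', $parts);
}
```

One would then call `header('Link: ' . build_hints_header($hints));` before any output is sent.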
great work, interesting stuff! looking forward to your contributions!!
Some interesting details here: https://blog.yoav.ws/being_pushy/
@zytzagoo interesting article indeed
If you guys need anything, just send me a short message and I can look into it :-) ;-)
Best Regards
Ylia Callan
WEB SWIFT SEO Tips - Tools - Techniques
Can https://github.com/futtta/autoptimize/blob/30e3986e5da98fb472a613b22f78c18d86bae685/autoptimize.php#L267 be changed to pass in the URLs of the cached + minified CSS and JS? That way we could do:
function http2_server_push($content, $cached_js, $cached_css) {
    header(
        sprintf(
            'Link: <%s>; rel=preload; as=%s',
            $cached_js, 'script'
        )
    );
    header(
        sprintf(
            'Link: <%s>; rel=preload; as=%s',
            $cached_css, 'style'
        ),
        false // append rather than replace the first Link header
    );
    return $content;
}
// assumes the filter is changed to pass the extra arguments as proposed
add_filter('autoptimize_html_after_minify', 'http2_server_push', 10, 3);
well, I think at the very least we should hook into WordPress' send_headers action hook to avoid sending headers out of order.
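A minimal sketch of that hooking, assuming the aggregated asset URLs are already known by the time `send_headers` fires (e.g. cached from an earlier request); the function name and array shape below are hypothetical:

```php
<?php
// Build the Link header value from a map of as-type => URL. A WordPress
// plugin would call this from the send_headers action, which fires
// before any body output:
//
//   add_action( 'send_headers', function () {
//       $assets = get_option( 'my_ao_preload_assets', array() ); // hypothetical
//       if ( $assets ) {
//           header( 'Link: ' . my_build_preload_header( $assets ) );
//       }
//   } );
function my_build_preload_header( array $assets ) {
    $parts = array();
    foreach ( $assets as $as => $url ) {
        $parts[] = sprintf( '<%s>; rel=preload; as=%s', $url, $as );
    }
    return implode( ',', $parts );
}
```

The catch, as discussed above, is that send_headers runs before Autoptimize has produced the aggregated files for the current request, so the URLs have to come from somewhere (a previous request, or a predictable cache filename).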
Anyway the workaround I have for now is
function http2_server_push($content) {
$header = "Link: ";
if (preg_match('#="([^"]+/js/autoptimize_[0-9a-f]+\.js)"#', $content, $matches)) {
$header .= sprintf(
'<%s>; rel=preload; as=%s,',
$matches[1], 'script'
);
}
if (preg_match('#="([^"]+/css/autoptimize_[0-9a-f]+\.css)"#', $content, $matches)) {
$header .=
sprintf(
'<%s>; rel=preload; as=%s',
$matches[1], 'style'
);
}
    if ($header !== "Link: ") { // don't send an empty Link header
        header(rtrim($header, ','));
    }
return $content;
}
add_filter('autoptimize_html_after_minify', 'http2_server_push');
Seems to work a bit on my blog https://www.trajano.net/. I see the JS, CSS being loaded as soon as possible when I check the network graph.
Nice! It does work; your AO JS is linked at the end of the HTML with a defer attribute, but it is indeed loaded immediately as per this webpagetest.org test, which also shows that the initial request has this header:
link: <https://www.trajano.net/wp-content/cache/autoptimize/js/autoptimize_7f1fb7f2c06f6c4218428fe4c1904176.js>; rel=preload; as=script,<https://www.trajano.net/wp-content/cache/autoptimize/css/autoptimize_dc58df74105fec34d124e8ddef6f0210.css>; rel=preload; as=style
the only thing which I can't deduce from the waterfall chart is whether the preloaded CSS/JS is render-blocking (it shouldn't be, as per the specs, obviously).
If I were you I would install a page cache plugin to minimize my TTFB, would be interesting to see if such a plugin also caches headers?
I would agree with the page cache. I just never got around to doing it; I've tried a few before (like, years ago) but they had some issues on a very limited-memory machine running Oracle Linux with SELinux on full blast. It may be better now, but I never invested the time.
However, for one thing I don't like having to wait on the cache; I would rather it change as I change things, since my blog is more for playing around than a heavily utilized site.
What I would like to know is whether it is at all possible to send the headers ASAP and then the content, because it looks like it processes the whole page first. But then a cache would likely help there.
it simply has to process the page first, as:
so yeah, I would go the page cache route :-)
Tried using WP Super Cache (had its share of issues with permissions and whatnot) but I got it working in the end. I lose the headers now :(
Content is faster though.
https://www.webpagetest.org/result/160821_V0_F2D/2/performance_optimization/#first_byte_time
I guess (hope) there must be page caching plugins that also cache headers ...
Here's a slightly better one that I am using; it will scan through and preload all JS, CSS, PNG and JPG files found in the content. I kind of want to remove the second regexp, but got lazy :)
$header = "Link: ";
// same extraction for double- and single-quoted attributes
$regexps = array(
    '#(src|href)="([^"]+\.(js|css|png|jpg)(\?[^"]+)?)"#',
    "#(src|href)='([^']+\.(js|css|png|jpg)(\?[^']+)?)'#",
);
foreach ($regexps as $regexp) {
    if (preg_match_all($regexp, $uncompressed_file_data, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $file = $match[2];
            $type = $match[3];
            if ($type === 'js') {
                $type = 'script';
            } elseif ($type === 'css') {
                $type = 'style';
            } else {
                $type = 'image';
            }
            $header .= sprintf('<%s>; rel=preload; as=%s,', $file, $type);
        }
    }
}
if ($header !== "Link: ") { // don't send an empty Link header
    header(rtrim($header, ","));
}
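For what it's worth, the second quote-style pass could be dropped entirely by using a back-reference that matches either quote character (a sketch; the helper name is hypothetical and the regexp hasn't been battle-tested against real-world markup):

```php
<?php
// Collect preload link-header entries for js/css/png/jpg resources,
// matching src/href attributes quoted with either " or '. The
// back-reference \2 ensures the closing quote matches the opening one.
function collect_preload_links($html) {
    $header = '';
    $regexp = '#(src|href)=("|\')([^"\']+\.(js|css|png|jpg)(\?[^"\']+)?)\2#';
    if (preg_match_all($regexp, $html, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $file = $match[3];
            $ext  = $match[4];
            if ($ext === 'js') {
                $type = 'script';
            } elseif ($ext === 'css') {
                $type = 'style';
            } else {
                $type = 'image';
            }
            $header .= sprintf('<%s>; rel=preload; as=%s,', $file, $type);
        }
    }
    return rtrim($header, ',');
}
```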
@2aces
There are two preloads around, and that can cause confusion.
The W3C document you linked is not a push preload. It is just an instruction to the browser to fetch a resource with highest priority (and not execute it, or to use code onload).
This can be done even if there is no HTTP2.
The other preload is, of course, pushing assets to the browser.
For server push you need two things: the Link header and a server such as nghttp2 that can parse the Link header and start sending. I haven't gotten it to work with nginx yet.
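For what it's worth, nginx later gained support for exactly this: since 1.13.9, the ngx_http_v2_module can act on Link preload headers set by the application and server-push those resources:

```nginx
# nginx 1.13.9+: push resources named in "Link: <...>; rel=preload"
# response headers set by the upstream application
server {
    listen 443 ssl http2;
    http2_push_preload on;
}
```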
Some thoughts:
Push optimisation is at its best when the critical css is pushed and the html doesn't have any inline/critical css; it's a separate file.
A pull preload for the full css file will be as good as the pushed css.
It's pushing the critical css which can make the difference.
As Ilya Grigorik says:
"In fact, if you have ever inlined a resource (CSS, JS, or an image), you've been "simulating" server push: an inlined resource is "pushed" as part of the parent document. The only difference is that HTTP 2.0 makes this pattern more efficient and far more powerful! ... HTTP 2.0 server push obsoletes inlining."
https://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/
So if it's something to be pushed ... it's the critical css, not the full css.
In other words, the best optimization is:
Just as a sanity check: we are not talking only about the HTTP2 push header anymore, right? If so, maybe we should change the issue title and description.
What is sure is that every setup will have different demands and outcomes. I mean:
Specifically about preload AND HTTP2 push, we've got to be careful: as @zytzagoo pointed out, it may result in overhead for subsequent page visits. In my specific test setups on WPEngine, with sites with small differences, it was worth it anyway.
Building on what @vijayaraghavanramanan listed as the best optimization:
PS: @trajano your code looks efficient for this, as soon as I am able, I will test it.
About using the WordPress class: it doesn't support preload right now. I think the best course of action would be expanding it for preload and HTTP2 push headers and, if it works well, proposing to merge it into core.
@2aces,
Correct.
Only Chrome/Opera support preload in stable, and Firefox is building it but it's not in Nightly yet. My points were incomplete.
So my points should read:
I mentioned not pushing the full css because one should push as little as possible, so that the html and critical css can load fast.
@vijayaraghavanramanan :
"for browsers which do not support preload, polyfill it with Autoptimize's existing js." My understanding of the preload draft specification, and of all the tests I conducted, is that pushing the CSS using HTTP2 push doesn't mean it is inserted into the DOM and parsed; it is only downloaded. Therefore, we need AO's javascript even when the browser and server support push headers.
"for these browsers also use a sessionstorage variable to optimize on second load, i.e, load the full css immediately in the head on repeat website view." Can you elaborate?
@2aces
Right. It's just downloaded. But it allows what you can do onload. So that's why I said onload in point 3 in my previous comment.
So you can declare it like this:
<link rel="preload" href="http://www.example.com/wp-content/cache/autoptimize/css/autoptimize-hash.css" as="style" onload="preloadFinished(this)">
and before it define a function in javascript
<script>
function preloadFinished( el ) {
...
}
</script>
or pass `href` instead of `this`.
About sessionstorage: what I meant was that once a visitor visits the site, the css file is already in the browser cache. So for repeat views you can do better than loading the css in the footer; the browser no longer needs to fetch the css from the server, as it is in the cache.
So in the footer, add this line in javascript:
sessionStorage.fullaocssloaded = "true";
And in the header,
<script>
if (! relpreloadsupport) {
if (sessionStorage.fullaocssloaded ) {
//javascript to insert the full css immediately.
}
}
</script>
What the above code does is that it checks if there's a sessionStorage variable. If true, it means almost certainly that the css is in the cache. So why not load it in the head.
It's slightly more complicated as some browsers do not allow sessionStorage in incognito mode.
This is the full javascript code. It uses requestAnimationFrame, but you can use Autoptimize's lCSS instead.
The code is a collection of various snippets at various points in the HTML, not to be used next to each other.
function preloadFinished(node) {
var res = document.createElement("link");
res.rel = "stylesheet";
res.href = node.href;
node.parentNode.insertBefore( res, node.nextSibling );
}
var linkSupportsPreload = function() {
try {
return document.createElement("link").relList.supports("preload");
} catch (e) {
return false;
}
};
var sessionStorageAvailable = function() {
var mod = 'modernizr';
try {
sessionStorage.setItem(mod, mod);
sessionStorage.removeItem(mod);
return true;
} catch (e) {
return false;
}
};
var cb = function() {
var links = document.getElementsByTagName("link");
for (var i = 0; i < links.length; i++ ) {
var link = links[i];
if( link.rel === "preload" && link.getAttribute( "as" ) === "style" ) {
preloadFinished(link);
}
}
}
if( !linkSupportsPreload() ) {
if( (sessionStorageAvailable() && sessionStorage.fullcssloaded) || !sessionStorageAvailable() ) {
cb();
}
}
var rAF = (function() {
return window.requestAnimationFrame || window.mozRequestAnimationFrame || window.msRequestAnimationFrame || window.oRequestAnimationFrame || window.webkitRequestAnimationFrame || function( callback ) {
window.setTimeout(callback, 1000 / 60);
}
})();
if ( !linkSupportsPreload() ) {
if( sessionStorageAvailable() && !sessionStorage.fullcssloaded ) {
setTimeout(function() {
rAF(cb);
sessionStorage.fullcssloaded = "true";
});
}
}
Btw, in select cases it makes sense to just have the full css pushed, block render with the full css, and not have critical/above-the-fold css at all.
It won't have speed problems, as the css arrives at the same time as the initial HTML, so rendering is as fast as the usual "inline and defer".
But that's the case where the full css is not a big file. If it's big, it makes sense to push only the critical css file.
I have a slightly improved optimization method below. The simplest solution is of course to push the full css, but this is not optimal as its size can be greater than 14KB. Some WordPress bundled themes include things such as Genericons which are huge, and it's not optimal to push them.
My improvement over the previous comments is that for repeat loads you don't need critical css. The server might still push it on repeat loads (though some are thinking of improvements to cancel it from the client side), and it's still better not to use it and thus avoid repaints.
Use onload if the page doesn't have the sessionstorage variable. If the page has the sessionstorage variable set to true, don't pull-preload; instead, insert style.min.css via javascript synchronously. (Browsers which don't support HTTP2 push see very old behaviour and don't enjoy "inline and defer"; a minor sacrifice, since most modern browsers do support HTTP2 push.)
My 2c:
Hi Frank,
Yeah, it's just catching up, and I would imagine only a few percent of servers fully support it. This is because, in addition to supporting push, they also need to be on HTTPS. Push can work without HTTPS, but browsers have implemented it so that it only works over HTTPS. So inlining critical css as the AO default makes sense.
About my code: I took it from Modernizr, but it doesn't depend on it. So alternatively one can use this from Mozilla's site:
function storageAvailable(type) {
try {
var storage = window[type],
x = '__storage_test__';
storage.setItem(x, x);
storage.removeItem(x);
return true;
}
catch(e) {
return false;
}
}
if (storageAvailable('localStorage')) {
// Yippee! We can use localStorage awesomeness
}
else {
// Too bad, no localStorage for us
}
with sessionStorage instead of localStorage.
I had a recent finding with HTTP2 Server Push, but it may just be a Chrome bug/limitation. https://trajano.net/2017/01/double-downloads-with-http2-server-push/ If you preload and the resource is not dynamically added using scripts, Chrome will download it twice.
However, we can probably do an optimization where the CSS is added to the DOM by script and loaded asynchronously. With HTTP/2 Server Push, the CSS can be preloaded in the background while the initial DOM is being processed and then bound later by the script.
weird that you got "The resource … was preloaded using link preload" while you were not preloading via a link but via the HTTP response header? or were you doing both?
Either one will yield the same problem. Doing it in the Link header will just download a bit more data sooner, I presume, because of server push. Again, I think it could be a Chrome implementation issue, because I don't see anything in the spec that states the resource needs to be loaded via script. But then again, it could be worded differently.
I have seen those sort of Chrome warnings but it's usually because something is not done right.
This is a good page and I don't get any warning here on Chrome:
@trajano Very interesting approach, I want to use it. Would you share the most recent version of your code?
@vijayaraghavanramanan I think the proper way to validate is to find out whether a resource was sent via H2 AND downloaded as part of the webpage request, either via the Link header or some other way. Calling H2PushResource may tell Apache to explicitly send the resource over the wire, but it could just be sent without being attached, and re-downloaded again.
I haven't updated to the latest version of Cache-Enabler; I did a diff, and the key functions are below, for wp-content/plugins/cache-enabler/inc/cache_enabler_disk.class.php:
private static function endsWith($haystack, $needle) {
    $length = strlen($needle);
    return (substr($haystack, -$length) === $needle);
}

private static function _link_header($uncompressed_file_data) {
    $header = "";
    $regexp = "#'((https?:)//[^']+/[^/']+\.js(\?[^']+)?)'#";
    if (preg_match_all($regexp, $uncompressed_file_data, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $file = $match[1];
            $type = 'script';
            $header .= sprintf('<%s>; rel=preload; as=%s,', $file, $type);
        }
    }
    $regexp = str_replace("'", '"', $regexp);
    if (preg_match_all($regexp, $uncompressed_file_data, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $file = $match[1];
            if (self::endsWith($file, '/html5.js')) {
                continue;
            }
            $type = 'script';
            $header .= sprintf('<%s>; rel=preload; as=%s,', $file, $type);
        }
    }
    $regexp = '#(src|href)="([^"]+\.(css|png|jpg)(\?[^"]+)?)"#';
    if (preg_match_all($regexp, $uncompressed_file_data, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $file = $match[2];
            $type = $match[3];
            if ($type === 'css') {
                $type = 'style';
            } else {
                $type = 'image';
            }
            $header .= sprintf('<%s>; rel=preload; as=%s,', $file, $type);
        }
    }
    $regexp = str_replace('"', "'", $regexp);
    if (preg_match_all($regexp, $uncompressed_file_data, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $file = $match[2];
            $type = $match[3];
            if ($type === 'css') {
                $type = 'style';
            } else {
                $type = 'image';
            }
            $header .= sprintf('<%s>; rel=preload; as=%s,', $file, $type);
        }
    }
    return rtrim($header, ",");
}
So open question; is HTTP2-pushing (almost) all resources (js, css, images) a good idea? Or should one rather push those resources that are needed for the initial page rendering?
If you can somehow push those that are part of the theme (i.e. scripts, CSS), that would be better. However, it will only work when the resource is loaded via script rather than as part of the source. https://trajano.net/2017/01/double-downloads-with-http2-server-push/
If HTTP/2 push is going to be enabled, it should somehow defer the scripts and CSS.
I am not sure whether that double-downloads issue is fixed in current versions of Chrome, though.
Another approach I can think of is to NOT automatically determine what should be pushed, but instead let the theme or blog developer do it themselves by putting <link rel="preload" ...> tags in their content. These tags would then be parsed and stripped, so they are sent via the HTTP headers instead. The point of having them in HTTP headers rather than the content is that, on HTTP2-server-push-compatible servers, the data can start being sent before the client has even requested it.
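That parse-and-strip step could look something like this (a sketch; `extract_preload_tags` is a hypothetical helper and it assumes the href/as attribute order shown):

```php
<?php
// Find <link rel="preload" href="..." as="..."> tags authored in the
// content, strip them from the HTML, and return the equivalent Link
// header value to be sent instead.
function extract_preload_tags($html) {
    $links = array();
    $regexp = '#<link\s+rel="preload"\s+href="([^"]+)"\s+as="([^"]+)"[^>]*>#i';
    if (preg_match_all($regexp, $html, $matches, PREG_SET_ORDER)) {
        foreach ($matches as $match) {
            $links[] = sprintf('<%s>; rel=preload; as=%s', $match[1], $match[2]);
        }
        $html = preg_replace($regexp, '', $html);
    }
    return array($html, implode(',', $links));
}
```

The caller would then do `list($html, $link) = extract_preload_tags($html);` and, if `$link` is non-empty, send it with `header('Link: ' . $link);` before echoing the stripped HTML.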
How is support for HTTP/2 push now, which servers support that out of the box?
@trajano you don't seem to HTTP2-push on your website (cfr. https://www.webpagetest.org/result/170404_BD_E26/1/details/#waterfall_view_step1)?
Not anymore, I found that it didn't work too well in Chrome. It was actually causing double downloads.
The main problem is that Chrome will only "link" the content if it was built from a script rather than the HTML. So <img src=... /> will double-download, but setting .src via JavaScript will not.
I still preload a few things on my main portfolio https://trajano.net/; however, since they're all remote resources (i.e. Google Fonts etc.) they won't be server-pushed. Only resources you're authoritative for can be server-pushed.
re. double downloads; sounds vaguely related to https://github.com/filamentgroup/loadCSS/issues/110
Hey, @futtta . Celso Bessa, AO pt-br translator here.
As you know, HTTP2 push is a big deal in optimization now. We are getting great results with it in some of our projects, and I think it would be great if AO included a mechanism to use those headers.
In some test setups we filtered AO's final JS/CSS srcs and created our own headers using PHP/WP send_headers ( https://codex.wordpress.org/Plugin_API/Action_Reference/send_headers ). This is not hard for us, but it might not be easy for a non-technical user.
In other setups we used AO alongside the HTTP2 server push plugin by @daveros, which is great, but it sends headers for all files in the original wp_enqueue queue and not for AO's aggregated files (my guess is it uses a hook triggered before AO runs).
Looking at these 2 cases, it seems to me that having this in AO would help a lot of your user base.
What do you think? I won't be able to code anything for the next 4 weeks, but if you think it's a good feature, I can work on it in August.