greghunt opened this issue 3 years ago
Since the fold is almost arbitrary and can change from request to request, we need a better way of determining what the critical CSS is. One option is to accept that a practical maximum size for inlined critical CSS is around 14 KB, and simply inline as much as we can rather than trying to determine exact viewports.
In this way, we can sort CSS rulesets by the position of their elements on the page and include as many as will fit within a given size budget, such as 14 KB. Element positions may change considerably on mobile as well, so we still need a good way of handling positions across different resolutions.
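As a rough sketch of that budgeting idea (the `RulesetWithPosition` shape, `INLINE_BUDGET_BYTES`, and `selectCriticalRules` are hypothetical names, and the rulesets are assumed to already be annotated with the top coordinate of their first matching element):

```ts
// Hypothetical shape: a ruleset's CSS text plus the top (y) coordinate
// of the first element on the page that it matches.
interface RulesetWithPosition {
  cssText: string;
  top: number;
}

const INLINE_BUDGET_BYTES = 14 * 1024; // rough budget for inlined critical CSS

// Sort rulesets by how high their elements sit on the page, then keep
// adding them to the critical set until the byte budget is exhausted.
function selectCriticalRules(rules: RulesetWithPosition[]): string {
  const sorted = [...rules].sort((a, b) => a.top - b.top);
  const encoder = new TextEncoder();
  const critical: string[] = [];
  let used = 0;

  for (const rule of sorted) {
    const size = encoder.encode(rule.cssText).length;
    if (used + size > INLINE_BUDGET_BYTES) break;
    critical.push(rule.cssText);
    used += size;
  }

  return critical.join("\n");
}
```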
Since we absolutely must have a browser of some sort render our HTML to determine the position of elements, we have two choices:

1. Run a browser on the server (a headless browser) and render the page there.
2. Use the visitor's own browser, which has already rendered the page.
If we take for granted that 2 is the better path, we can lean more on the client browser to do more of the processing for us. Initially, we began by minimizing the payload to the server and processing in the background. This minimized the frontend impact of our AJAX script and the load time of the page. However, once we determine from this first payload whether or not we've cached this page yet, we'll need to get the full HTML source again in order to process it. There are two ways of doing this: either the client sends the full HTML source up in a second request, or the server fetches the page again itself.
Neither of these two options is ideal, and both would lead to poor performance. It would also be challenging to do the CSS parsing and manipulation server-side in PHP, which could be error-prone and out of sync with modern browsers.
The solution at this point is to simply leverage the client side more up-front. More testing is needed, but parsing and sorting all of the CSS rules, their elements, and their order on the client is considerably simpler and more accurate than doing it server-side. The increased execution time of our script should also be negligible.
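As a rough sketch of what that client-side pass could look like (assumptions: only same-origin stylesheets are readable, `@media` and other non-style rules are skipped for simplicity, and `collectPositionedRules` is a hypothetical name):

```ts
interface PositionedRule {
  cssText: string;
  top: number; // document-relative y coordinate of the first matching element
}

// cssRules throws a SecurityError for cross-origin stylesheets, so read it defensively.
function safeCssRules(sheet: CSSStyleSheet): CSSRuleList | null {
  try {
    return sheet.cssRules;
  } catch {
    return null;
  }
}

// Walk every readable stylesheet, pair each style rule with the top
// coordinate of its first matching element, and sort by that y coordinate.
function collectPositionedRules(): PositionedRule[] {
  const out: PositionedRule[] = [];

  for (const sheet of Array.from(document.styleSheets)) {
    const rules = safeCssRules(sheet);
    if (!rules) continue;

    for (const rule of Array.from(rules)) {
      if (!(rule instanceof CSSStyleRule)) continue; // skip @media etc. for simplicity

      let el: Element | null;
      try {
        el = document.querySelector(rule.selectorText);
      } catch {
        continue; // selector that querySelector can't parse
      }
      if (!el) continue;

      out.push({
        cssText: rule.cssText,
        top: el.getBoundingClientRect().top + window.scrollY,
      });
    }
  }

  return out.sort((a, b) => a.top - b.top);
}
```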
To make a fair comparison, we assumed that the server-side response would initiate a background process, so we wouldn't have to wait for the response either way. We therefore simply logged the AJAX request to a file on the server in both cases, which lets us compare just the time to process the payload on the client side.
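A minimal sketch of how that timing could be instrumented on the client, assuming a hypothetical `/log` endpoint that just appends the payload to a file:

```ts
// Hypothetical endpoint that simply appends the payload to a log file,
// standing in for the real background process on the server.
const LOG_ENDPOINT = "/log";

// Time the client-side work plus the request for a given payload builder,
// e.g. () => collectPositionedRules() for the client-heavy variant.
async function timeRun(buildPayload: () => unknown): Promise<number> {
  const start = performance.now();
  const payload = buildPayload();
  await fetch(LOG_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return performance.now() - start;
}
```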
In the initial case of logging only the element coordinates and a small amount of page metadata, the average response/execution time was between 20 and 30 ms.
In the second case, where we offload the processing to the client side by parsing all CSS styles, getting their corresponding element coordinates, and sorting the result by y coordinate, the average response/execution time was between 30 and 40 ms.
Our conclusion is that the roughly 10 ms difference is negligible, especially considering the other advantages this approach brings:
In order to split our CSS files, we need a way to determine their relative priorities.
The challenge here is that the fold for our critical CSS depends on the viewport, which can vary almost infinitely. We therefore need a way of determining a safe zone that encompasses it, which means we must store viewport information in the Logger so we can analyze where the safe fold actually is.
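One possible way to derive that safe fold, assuming the Logger records viewport dimensions per request (the percentile approach and the names below are assumptions, not something specified here):

```ts
// Hypothetical record stored by the Logger: one entry per page view,
// with the viewport dimensions of that request.
interface ViewportLog {
  width: number;
  height: number;
}

// One possible definition of a "safe fold": the 95th percentile of logged
// viewport heights, so that the zone above it covers the full first
// viewport for 95% of logged visits.
function safeFold(logs: ViewportLog[], percentile = 0.95): number {
  if (logs.length === 0) return 0;
  const heights = logs.map((l) => l.height).sort((a, b) => a - b);
  const index = Math.floor(percentile * (heights.length - 1));
  return heights[index];
}
```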