Closed CaptainCodeman closed 7 years ago
Improved it by serving most of the static files directly from AppEngine's edge cache / CDN: much better results, and the entire thing runs easily on a single F1 instance (some of the others above were on F2s, and having every file served by the instances meant more instances were used).
https://www.webpagetest.org/result/170807_1X_12ZC/
It still works for non-AppEngine users as well.
Also, I'm thinking the different caching approach above could be optional - i.e. push combined dependencies based on the route, or push dependencies based on each file requested (the current prpl-server-node behavior).
Repo is here: https://github.com/CaptainCodeman/prpl-server-go Example app: https://github.com/CaptainCodeman/prpl-server-example
I created the related issue https://github.com/Polymer/polymer-build/issues/260, because I didn't understand what you were trying to do. So if I understand correctly (pardon my ignorance):
Push dependencies based on the route and don't push anything for the dependent requests (to avoid duplicating pushed files)
You're doing route-based pushing instead of `page-element.html`-based pushing.
Given the URL: https://prpl-dot-captain-codeman.appspot.com/view2
You are pushing the whole dependency tree including my-view2.html
This reduces the number of round trips to zero, because the server pushes everything the browser needs.
Service worker does its thing and caches all other elements.
Service worker intervenes and no network requests have to be made.
~If the client does not support service workers, it will have to fetch all dependencies for a new route, because nothing gets pushed for the new view?~ It will get the es6-bundled items.
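For concreteness, I'd guess the route-based entries in `push-manifest.json` look something like this (an illustrative sketch: the file paths and weights here are assumptions, not the actual manifest from the example app):

```json
{
  "/view2": {
    "src/my-app.html": { "type": "document", "weight": 1 },
    "src/my-view2.html": { "type": "document", "weight": 1 },
    "bower_components/webcomponentsjs/webcomponents-loader.js": { "type": "script", "weight": 1 }
  }
}
```

i.e. the key is the URL route rather than a source file, and the value lists everything to push for that route.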
Yes, that's a good description.
As it was previously, the webcomponents-loader and app-shell weren't requested until the initial page had been sent and parsed by the client (the first latency introduced); then the shell needed to be parsed and the client-side routing had to kick in before the request for the associated view fragment was made (more latency introduced).
Although it requires the extra URL route-to-fragment mapping to work, it doesn't feel like much to add and it seems to improve that first request. It could always fall back to the original push-dependencies-of-files behaviour. Of course, it especially benefits browsers that support HTTP/2 but not service workers.
The other part it adds is moving everything closer to the user via edge caching / CDN: if requests take 50ms to respond and don't need to hit the server, that's going to be better than if they take 500ms (because caching is disabled).
@CaptainCodeman The changes in #48 are released in 0.10.0. It adds a default caching header, and adds some facilities for further customization. WDYT?
Hey @aomarks @FredKSchott et al., Polymer/polymer-cli#283 seems related to this. Is there any coordination that should happen between the ongoing efforts on these two? I'm new to both codebases but happy to try to help in any way I can if there's anything I can do!
@aomarks isn't there a risk that a client can get an incomplete and incompatible "set" of files? (They won't all necessarily be updated in lock-step with intermediate and browser caches.) Having a low cache timeout reduces the window of risk, but also reduces the effectiveness of the caching, making it more likely that a request needs to go back to the origin.
I went with the single app-level version, which allows everything to be invalidated as a set, vs the individual file hashes that would otherwise be needed. The latter would require the references in every HTML Import to be updated, which would probably result in lots of files changing anyway, so diminishing returns for a big increase in complexity.
Yes, by default an update to a site that has content mutable URLs now creates a 60 second window where file versions could be out of sync.
The reason I added a default `max-age=60` was because I found that pushed resources won't actually get used with `max-age=0`, since the browser considers them immediately stale even within one page load, so push was effectively useless by default. It seems that we want a default window that's high enough to make push effective for one page load, but low enough to minimize the out-of-sync problem.
If you have content-immutable URLs via cache-busting directories/filenames/URL parameters, then it definitely makes sense to bump `Cache-Control` up to a very high value. #48 adds the ability to set the default header, as well as to completely override caching by setting the header before the middleware executes.
Closing because I think our default behavior is now more reasonable, and we have a mechanism for implementing custom caching behavior when desired.
Some thoughts on PRPL server after doing some experiments ...
All the pieces of PRPL are great, but it sometimes feels like they're pulling in slightly different directions with how things are currently set up.
Here are some things I've experimented with to try to address some of these issues and improve performance / avoid @slightlyoff finding my website and telling me off ...
- Setting `dontCacheBustUrlsMatching: /./` in `sw-precache-config.js` so that it will use the files loaded in the first request
- Adding route-based entries such as `"/view1": "src/my-view1.html"` to `push-manifest.json` for the fragment and the app-shell (and adding entrypoint deps such as the webcomponents-loader)
- Caching static files aggressively (`Cache-Control: public, max-age=31536000, immutable`)
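The `sw-precache-config.js` tweak might look like this (a sketch; the glob and fallback values are assumptions for a typical Polymer Starter Kit build, only `dontCacheBustUrlsMatching` is the point):

```js
// sw-precache-config.js (sketch; staticFileGlobs/navigateFallback are assumed)
module.exports = {
  staticFileGlobs: ['build/es6-bundled/**/*'],
  navigateFallback: 'index.html',
  // Treat URLs as already cache-busted, so the service worker reuses
  // the responses fetched (or pushed) during the first request:
  dontCacheBustUrlsMatching: /./,
};
```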
It seems to work based on my testing so far and seems to make things faster / lighter. Here's the before:
https://www.webpagetest.org/result/170805_GE_YR8/
In the browser: 194 requests / 274Kb transferred | Finish 1.66s
and the after:
https://www.webpagetest.org/result/170805_MA_132X/
In the browser: 91 requests / 93.7Kb transferred | Finish 752ms
Test site: https://prpl-dot-captain-codeman.appspot.com/ (apologies if it says Quota Exceeded, it means I might need to put some more coins in the meter).
Caveats: this was all with Go, AppEngine and the basic Polymer Starter Kit - results with a larger app and / or node may be different.
Anyway, interested to hear people's thoughts on whether this makes sense or if there's something obvious I've missed. I know there's probably going to be situations where too much could be pushed which could be detrimental (e.g. the app-shell loading is delayed) but I'm assuming that can be addressed by ordering / prioritizing what things are pushed first.