adam-lynch opened this issue 1 year ago (status: Open)
@adam-lynch `cleanupOutdatedCaches` has the problem of requiring a hard refresh on the client side: we're using a stale-while-revalidate strategy (HTML pages in the cache reference old asset versions that are missing from the server), so if there is a new version the app will stop working (404, since deploying a new app version removes the old assets from the server). Without `cleanupOutdatedCaches`, you will end up with storage quota problems; it will depend on the app size.
https://developer.chrome.com/docs/workbox/modules/workbox-precaching/#clean-up-old-precaches
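For reference, this cleanup behavior is set through the plugin's `workbox` options. A minimal `vite.config` sketch, assuming `vite-plugin-pwa` (everything except the `workbox` block is omitted or illustrative):

```javascript
// vite.config.js — sketch only; other plugin options omitted.
import { defineConfig } from 'vite';
import { VitePWA } from 'vite-plugin-pwa';

export default defineConfig({
  plugins: [
    VitePWA({
      workbox: {
        // Delete precaches left behind by older deploys so obsolete
        // assets don't keep counting against the storage quota.
        cleanupOutdatedCaches: true,
      },
    }),
  ],
});
```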
Thanks @userquin. From that link:
> This obsolete data shouldn't interfere with normal operations, but it does contribute towards your overall storage quota usage, and it can be friendlier to your users to explicitly delete it. You can do this by adding `cleanupOutdatedCaches()` to your service worker, or setting `cleanupOutdatedCaches: true` if you're using one of Workbox's build tools to generate your service worker.
What happens when the limit is reached? I would've assumed that the oldest files would be evicted from the cache to make space, but I'm not sure now.
Our app is pretty large.
@adam-lynch In theory, workbox-build will try to update changed assets. If the quota is reached, the browser can remove assets without any logic, so beware of assuming which resources/assets will be evicted.
https://developer.chrome.com/docs/workbox/understanding-storage-quota/
https://love2dev.com/blog/what-is-the-service-worker-cache-storage-limit/
On a quota error, the new service worker will not install/activate, and so neither will your new app version (or you may get unpredictable behavior; it will depend on the user agent and/or OS). You should also deal with that, maybe by adding an error callback and showing a message/toast as suggested in a previous comment (first link).
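One way to act before the quota error ever fires is to check `navigator.storage.estimate()`, which reports current usage and quota. A sketch; the helper name and the 90% threshold are assumptions, not anything from vite-plugin-pwa:

```javascript
// Hypothetical helper: returns true when storage usage is close to the
// quota, so the app can warn the user before installs start failing.
function isNearQuota(usage, quota, threshold = 0.9) {
  if (!quota) return false; // some browsers report quota as 0/undefined
  return usage / quota >= threshold;
}

// Browser usage (sketch; showStorageWarningToast is a placeholder):
// navigator.storage.estimate().then(({ usage, quota }) => {
//   if (isNearQuota(usage, quota)) showStorageWarningToast();
// });
```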
Thanks for this, v helpful. I assume I'll need to switch to `injectManifest` to add an expiration plugin / custom expiration logic.
> maybe adding an error callback and showing a message/toast as suggested in the previous comment (first link).

I don't understand; which article are you referring to?
Oops, sorry, it was here: https://web.dev/i18n/en/storage-for-the-web/#over-quota
EDIT: You can use plugins with the `generateSW` strategy; you don't need to switch to building your own service worker.
Check this: https://github.com/vitest-dev/vitest/blob/main/docs/vite.config.ts#L86
Beware of opaque responses (CORS).
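To expand on that EDIT: with the default `generateSW` strategy, expiration can be attached through `runtimeCaching` entries in the plugin's `workbox` options (this is the approach the linked vitest config takes). A sketch; the URL pattern, cache name, and limits below are made-up examples:

```javascript
// Sketch of workbox options under the generateSW strategy; values are examples.
VitePWA({
  workbox: {
    cleanupOutdatedCaches: true,
    runtimeCaching: [
      {
        // Hypothetical CDN pattern — adjust to your own asset origin.
        urlPattern: /^https:\/\/cdn\.example\.com\/.*/i,
        handler: 'CacheFirst',
        options: {
          cacheName: 'cdn-cache',
          expiration: {
            maxEntries: 100,                  // evict oldest beyond this count
            maxAgeSeconds: 60 * 60 * 24 * 30, // 30 days
          },
          cacheableResponse: {
            // Status 0 covers opaque responses — the CORS caveat above.
            statuses: [0, 200],
          },
        },
      },
    ],
  },
});
```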
It looks like `immutable` may have fixed my issue, so I don't need to do this.
I did yet another review of our headers and found:

- `index.html` and `sw.js` were being cached. I updated those to `no-cache`. I don't think this caused any real issue 🤔
- `Content-Type` … I don't think this had any real effect.
- `/assets/**`'s `Cache-Control` header went from `public, max-age=31536000` to `public, max-age=31536000, immutable`.

Now, I can't make the issue happen no matter what I try.
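That header policy amounts to a small routing rule. This Express-style sketch is only an illustration (the server framework, paths, and function name are assumptions, not part of the thread):

```javascript
// Hypothetical mapping from request path to the Cache-Control value above.
function cacheControlFor(path) {
  // The HTML entry point and the service worker must always be revalidated.
  if (path === '/index.html' || path === '/sw.js') return 'no-cache';
  // Hashed build assets never change, so they can be cached forever.
  if (path.startsWith('/assets/')) {
    return 'public, max-age=31536000, immutable';
  }
  return 'no-cache'; // conservative default
}
```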
Maybe it could still happen in some browsers (e.g. iOS Safari wouldn't surprise me), so to be safe, if our React app doesn't mount, I programmatically delete the entire cache in Cache Storage that the service worker uses, unregister all service workers, and reload. (The code for this is just in a `<script>...</script>` in the `index.html`.)
Also, I'm not going to remove the old assets (from before this point in time) for a while, because when users requested them they were not `immutable` (even though they are now).
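A recovery `<script>` like the one described could look roughly like this. It's a sketch written over injectable dependencies so the logic is testable outside a browser; the `'#root'` mount check is a hypothetical placeholder:

```javascript
// Sketch: wipe every cache in Cache Storage, unregister all service
// workers, then reload — only meant to run when the app failed to mount.
async function resetPwa(cacheStorage, swContainer, reload) {
  const keys = await cacheStorage.keys();
  await Promise.all(keys.map((key) => cacheStorage.delete(key)));
  const regs = await swContainer.getRegistrations();
  await Promise.all(regs.map((reg) => reg.unregister()));
  reload();
}

// Browser usage (inside <script> in index.html; '#root' is an assumption):
// if (!document.querySelector('#root')?.hasChildNodes()) {
//   resetPwa(caches, navigator.serviceWorker, () => location.reload());
// }
```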
> Beware with opaque responses (CORS)
I reviewed our cache storage and it looks fine, thanks.
@adam-lynch I'm adding a new entry in the deploy section: https://github.com/vite-pwa/docs/pull/19
@adam-lynch can this be closed?
yeah
I've seen @userquin and the docs say to be careful when setting `cleanupOutdatedCaches` to `false`. Why? What should we be aware of?

Background: we ran into an issue somewhat like #177 causing blank screens, and we found that keeping the old files on the server was the only way to fix it. This is very awkwardly accomplished by keeping the assets in a separate GitHub repository. Now we've run into a size limit, so we've had to start deleting the oldest of those assets, and we're concerned that infrequent users will run into this issue again (we do a lot of deploys). So, I'm glad we don't need to do this if we set `cleanupOutdatedCaches` to `false` instead.