Closed: lidel closed this issue 8 months ago
we should be caching only 2xx/3xx responses permanently (for IPFS only); maybe 15s for errors.
IPNS is already being cached at the verified-fetch layer (IIRC)
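A minimal sketch of the policy described above (the function name and exact TTL values are mine, not from the codebase): permanent caching for 2xx/3xx IPFS responses, a short TTL for errors.

```javascript
// Illustrative caching policy: immutable /ipfs/ content on a 2xx/3xx response
// can be cached "permanently"; errors may be transient, so re-check after ~15s.
function cacheControlFor(status) {
  if (status >= 200 && status < 400) {
    // /ipfs/ content is content-addressed and immutable
    return 'public, max-age=31536000, immutable';
  }
  // error responses: cache only briefly
  return 'public, max-age=15';
}
```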
I've been investigating this, and in particular what the advantages of using the Cache API would be over IndexedDB (which is already implemented but not used).
The following is a useful resource: https://medium.com/dev-channel/offline-storage-for-progressive-web-apps-70d52695513c#.19w8r1c4o
Some initial insights:
@lidel Are there any specific benefits of the Cache API over indexedDB you are aware of?
@2color Thanks for looking into this. Indeed, shared quota is a bummer, but let's remember it applies only to Cache API, not individual block responses.
Cache API is still something we need: it stores the final bytes produced by the IPFS stack, which means the work of turning an IPFS DAG into the final file happens only once, not on every page load for every image etc. on the page.
Block-level caching is useful when loading a new subpath under a path we already visited, but IIUC we already get that from trustless-gateway.link responses being cached (responses have a cache-control header that tells the browser to cache each block for as long as possible), so it's lower priority.
I think the priority is to set up Cache API, especially for /ipns/.. content path responses, and then it's fine to look into an IndexedDB-based blockstore and see if it improves any metric.
Somewhat related, I discovered we don't set the right cache-control headers in @helia/verified-fetch.
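To illustrate the distinction (this is a hypothetical helper, not the actual @helia/verified-fetch implementation, and the TTL values are illustrative): immutable /ipfs/ paths can be cached indefinitely, while mutable /ipns/ paths need a short TTL.

```javascript
// Illustrative cache-control policy per content path type.
function suggestedCacheControl(path) {
  if (path.startsWith('/ipfs/')) {
    // content-addressed, immutable: safe to cache forever
    return 'public, max-age=31536000, immutable';
  }
  if (path.startsWith('/ipns/')) {
    // mutable name: short TTL (ideally this would follow the IPNS record TTL)
    return 'public, max-age=60';
  }
  return 'no-cache';
}
```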
AFAIK a `Response` produced by a service worker is NOT cached by the browser unless it is explicitly put in the cache. Right now we don't do that, so a page reload triggers fetches of all chunks as blocks, which are returned from cache and re-assembled again, rather than the final result being returned from cache. If we add final `Response`s to the cache, a page reload will not trigger block requests for resources which were already handled.
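A sketch of what "explicitly put in the cache" could look like in the service worker's fetch path (the function name and `assemble` callback are mine; `assemble` stands in for the DAG-to-file work the IPFS stack does). The cache is checked first, and on a miss the assembled `Response` is stored via `Cache.put` so the next reload skips the block requests entirely.

```javascript
// Serve from the Cache API if possible; otherwise assemble the final bytes
// once and store them so subsequent page loads hit the cache.
async function respondFromCacheOrAssemble(cache, request, assemble) {
  const cached = await cache.match(request);
  if (cached !== undefined) {
    return cached; // final bytes already stored: no block fetches needed
  }
  const response = await assemble(request);
  if (response.ok) {
    // store a clone: a Response body can only be consumed once
    await cache.put(request, response.clone());
  }
  return response;
}
```

In an actual service worker this would be called from the `fetch` event handler with a cache opened via `caches.open(...)`.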
Ref.