esi / esi-issues

Issue tracking and feature requests for ESI
https://esi.evetech.net/

Add "stale-if-error" and "stale-while-revalidate" to "Cache-Control" HTTP response Header #1088

Open exodus4d opened 5 years ago

exodus4d commented 5 years ago

Feature Request

Adding proper stale-if-error and stale-while-revalidate directives to the Cache-Control response Header would help clients re-use cached response data in case of ESI errors:

A proper cache strategy for ESI already checks the Cache-Control response Header (e.g. Cache-Control: public, max-age=31536000). The ETag and Last-Modified response Headers can be used to validate cached data via their counterparts, the If-None-Match and If-Modified-Since request Headers.
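For context, here is a minimal sketch of that validation flow in Python with the requests library. The /status/ route is only an example, and the cached dict stands in for whatever cache store a client actually uses:

```python
import requests

ESI_URL = "https://esi.evetech.net/latest/status/"  # any cacheable route works here

# First request: keep the body together with its validators.
resp = requests.get(ESI_URL)
cached = {
    "body": resp.json(),
    "etag": resp.headers.get("ETag"),
    "last_modified": resp.headers.get("Last-Modified"),
}

# Revalidation: send the validators back; a 304 means the cached body is still valid.
headers = {}
if cached["etag"]:
    headers["If-None-Match"] = cached["etag"]
if cached["last_modified"]:
    headers["If-Modified-Since"] = cached["last_modified"]

revalidate = requests.get(ESI_URL, headers=headers)
if revalidate.status_code == 304:
    data = cached["body"]        # unchanged, reuse the cache
else:
    data = revalidate.json()     # fresh data, replace the cache
    cached["body"] = data
```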

At some point, any cached data expires or needs to be revalidated... In case of ESI problems (errors), a client cannot receive (or validate) new data.

Use case

Example response Header:

Cache-Control: max-age=604800, stale-if-error=259200, stale-while-revalidate=86400

Source
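For reference, a small Python helper (illustrative only, not part of any existing client) that splits such a header value into its directives:

```python
def parse_cache_control(value):
    """Split a Cache-Control header value into a {directive: seconds_or_None} dict."""
    directives = {}
    for part in value.split(","):
        part = part.strip()
        if "=" in part:
            name, _, num = part.partition("=")
            directives[name.lower()] = int(num)
        elif part:
            directives[part.lower()] = None
    return directives

parse_cache_control("max-age=604800, stale-if-error=259200, stale-while-revalidate=86400")
# -> {'max-age': 604800, 'stale-if-error': 259200, 'stale-while-revalidate': 86400}
```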

CarbonAlabel commented 5 years ago

Interesting suggestion, but what would be an actual use case for this? You're just describing how the directives are supposed to work, and there is little browser support for them.

On top of that, having an API which serves many clients with varying use cases dictate how they should behave in case of ESI errors might be a bit presumptuous.

exodus4d commented 5 years ago

OK, here is an example:

A wormhole mapper has to frequently pull current pilot locations (current system) from ESI. If ESI becomes unstable/offline (due to downtime), these requests fail -> error. Now suppose ESI sent a stale-if-error=900 (15min) directive with its responses, either shortly (maybe 1-2min) before an expected downtime, or always. The client cache can store the stale-if-error=900 (and the max-age) information together with the response data. When ESI goes offline (downtime), the next location request (after max-age has expired) ends with e.g. an HTTP 5xx or connection failure error. The client cache analyses the bad response, sees the stale-if-error=900 info in the previously cached value, and can now use the previous cached location.
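Roughly the decision logic that client cache would follow, sketched in Python. The CACHE dict, the _directive helper and get_with_stale_if_error are illustrative names, not an existing ESI client API:

```python
import re
import time

import requests

CACHE = {}  # url -> {"body", "stored_at", "max_age", "stale_if_error"}

def _directive(cache_control, name, default=0):
    """Pull a numeric directive (e.g. max-age=600) out of a Cache-Control value."""
    match = re.search(rf"{name}=(\d+)", cache_control or "")
    return int(match.group(1)) if match else default

def get_with_stale_if_error(url):
    entry = CACHE.get(url)
    now = time.time()

    # Still fresh: no request needed.
    if entry and now - entry["stored_at"] < entry["max_age"]:
        return entry["body"]

    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        # ESI error or downtime: serve stale data while it is still inside
        # the stale-if-error window, otherwise give up.
        if entry and now - entry["stored_at"] < entry["max_age"] + entry["stale_if_error"]:
            return entry["body"]
        raise

    cc = resp.headers.get("Cache-Control", "")
    CACHE[url] = {
        "body": resp.json(),
        "stored_at": now,
        "max_age": _directive(cc, "max-age"),
        "stale_if_error": _directive(cc, "stale-if-error"),
    }
    return CACHE[url]["body"]
```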

The current "workaround" is to simply hard-code the daily downtime, "skip" requests within a 10min window, and force the cache to use the previous data.
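That workaround looks roughly like the following sketch; the 11:00 UTC start and the 10-minute window are assumptions for illustration, as are the hypothetical cached_location / fetch_location names:

```python
from datetime import datetime, time, timezone

# Hard-coded skip window around the daily downtime (values are illustrative).
DOWNTIME_START = time(11, 0)   # 11:00 UTC
DOWNTIME_END = time(11, 10)    # 10 minutes later

def in_downtime_window(now=None):
    """True while requests should be skipped and the old cache value reused."""
    now = now or datetime.now(timezone.utc)
    return DOWNTIME_START <= now.time() <= DOWNTIME_END

# Usage (hypothetical helpers): skip the ESI request entirely during the window.
# data = cached_location if in_downtime_window() else fetch_location()
```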

Of course it is a "nice to have" and might not be needed on all endpoints. If a route provides a large max-age (1d), it is unlikely that the next request falls within the downtime interval.

Client Cache implementation

I know it is not trivial: the cache can no longer just concentrate on max-age and throw cached data away after it expires, because the data might still be needed on the next failed request... But luckily there are HTTP clients like Guzzle (used in all major PHP frameworks) which can be extended with custom middlewares, like a cache middleware that supports stale-if-error and stale-while-revalidate Header data out of the box.
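As a rough sketch of what such a middleware does for stale-while-revalidate (written in Python here rather than PHP, with an illustrative cache entry layout): serve the stale body immediately and refresh it in the background while still inside the stale-while-revalidate window.

```python
import threading
import time

import requests

def serve_with_stale_while_revalidate(url, entry, swr_window):
    """Serve a cached body and decide whether it needs a (background) refresh.

    `entry` is an illustrative dict: {"body": ..., "stored_at": ..., "max_age": ...}.
    """
    age = time.time() - entry["stored_at"]

    if age < entry["max_age"]:
        return entry["body"]                       # still fresh

    if age < entry["max_age"] + swr_window:
        # Stale but inside stale-while-revalidate: answer from cache right
        # away and refresh asynchronously so a later caller gets fresh data.
        threading.Thread(target=_refresh, args=(url, entry), daemon=True).start()
        return entry["body"]

    return _refresh(url, entry)                    # too stale: refresh synchronously

def _refresh(url, entry):
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    entry["body"] = resp.json()
    entry["stored_at"] = time.time()
    return entry["body"]
```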

Conclusion

In a perfect world where all endpoints send a stale-if-error=900 (15min) Header, a warm cache will survive for at least 15min and serve valid cached data, no matter when ESI goes offline.