helmgast / lore

A web-based storytelling framework.

Static delivery of Lore #307

Open ripperdoc opened 2 years ago

ripperdoc commented 2 years ago

We can use Nginx to cache pages, but we should avoid caching certain dynamic parts. Can we use SSI for this?

https://nginx.org/en/docs/http/ngx_http_ssi_module.html

https://www.nginx.com/resources/wiki/start/topics/examples/dynamic_ssi/

https://stackoverflow.com/questions/38000435/nginx-ssi-independent-fragment-caching

https://dev.to/ale_ukr/how-to-make-nginx-cache-cookie-aware-2ffl

https://www.innoq.com/de/blog/nginx-ssi-env/

Need to test how caching works for the template page plus the individual fragments.
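As a starting point for that test, a minimal sketch of the Nginx side could look like the following. It assumes a backend on 127.0.0.1:5000 and a fragment URL /fragments/usermenu (both placeholder names) and is untested:

```nginx
# Cache storage; belongs in the http {} context.
proxy_cache_path /var/cache/nginx/lore levels=1:2 keys_zone=lore_cache:10m inactive=60m;

server {
    listen 80;
    server_name lore.example.com;  # placeholder

    # Cached page shell. Nginx stores the upstream response with the SSI
    # directives still in it and evaluates them each time the response is
    # served, so the includes stay dynamic even on cache hits.
    location / {
        ssi on;
        proxy_pass http://127.0.0.1:5000;
        proxy_cache lore_cache;
        proxy_cache_valid 200 60m;
    }

    # Dynamic fragment requested via <!--# include virtual="/fragments/..." -->.
    # Marked internal so it can only be reached through SSI subrequests,
    # and it is never cached.
    location /fragments/ {
        internal;
        proxy_pass http://127.0.0.1:5000;
    }
}
```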

ripperdoc commented 2 years ago

Caching logic. We have two main considerations:

They boil down to setting a time that is the maximum acceptable "time to publish", i.e. the time from a change being made until all users can see it. We are used to everything being fresh, so this time might be short. On the other hand, it's pretty unlikely that a user sits on a very recent but now invalid copy. An author might accept that it takes an hour before all users see the change to their article, and most users might not access it within that hour anyway.

This time-to-publish setting would be used as the expiration time for all private and downstream public caches. For our own Nginx cache we can cache more aggressively, because we can do invalidation (through proxy_cache_bypass, meaning we can send an HTTP request that refreshes the cache entry, or by accessing the Nginx cache on disk and deleting the content). There is still a risk that this invalidation fails, or that it's never triggered because the content indirectly depends on some other change and therefore isn't invalidated when that change happens. So we should also set an expiry for the Nginx cache that we could live with if invalidation fails.
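A sketch of those two layers in Nginx terms (the header name X-Refresh-Cache and the timings are just placeholders):

```nginx
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_cache lore_cache;

    # Aggressive Nginx-side caching; 24h is the fallback expiry we accept
    # if invalidation is never triggered for a change.
    proxy_cache_valid 200 24h;

    # Browsers and downstream public caches only get the short
    # time-to-publish window.
    expires 1h;

    # A request carrying this header skips the cached copy, fetches a
    # fresh response from the backend and stores it, i.e. it refreshes
    # the cache entry for that URL.
    proxy_cache_bypass $http_x_refresh_cache;
}
```

Refreshing a changed page would then just be a request like `curl -H 'X-Refresh-Cache: 1' https://lore.example.com/the/changed/url` from the backend or a deploy hook.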

Even if the cache time has expired, we might not need to send the file again. By setting Last-Modified, a client that re-requests the page with If-Modified-Since can be answered with an empty 304 response, so nothing needs to be rendered or transferred.
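Roughly, that exchange looks like this (URL and dates invented for illustration):

```
GET /some/article HTTP/1.1
Host: lore.example.com
If-Modified-Since: Tue, 01 Mar 2022 10:00:00 GMT

HTTP/1.1 304 Not Modified
Last-Modified: Tue, 01 Mar 2022 10:00:00 GMT
```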

So we might end up with final cache parameters for each template / URL like this:

On the current Helmgast site, we would have to use private caching only for everything where a user could be logged in, which is virtually all of the site, since the URL stays the same whether or not you are logged in. Public caching here would mean users seeing other random users' pages or names. If we could refactor the user menu and edit menu into JavaScript, we could serve the same HTML response to everyone but still display things differently per user.
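Until then we could at least make the cache cookie-aware, as in the dev.to article linked above, so logged-in users bypass the shared cache entirely. A sketch, assuming the session cookie is called session (Flask's default name):

```nginx
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_cache lore_cache;
    proxy_cache_valid 200 1h;

    # If a session cookie is present, skip the shared cache and don't
    # store the response, so logged-in users never see or produce
    # cached pages meant for anonymous visitors.
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;
}
```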

ripperdoc commented 2 years ago

To optimize further, we should make sure we can determine Last-Modified as cheaply as possible (one DB roundtrip), but it should also never be earlier than the last time the source code changed. Otherwise we risk that new content won't be fetched because the article is unchanged in the DB even though the template has changed.
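A rough sketch of that logic in a Flask view; all names here (DEPLOY_TIME, get_article_modified, the route) are placeholders, not the actual Lore code:

```python
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)

# Hypothetical: recorded once at deploy time, e.g. from an env var or the
# mtime of the code/templates directory.
DEPLOY_TIME = datetime(2022, 3, 1, tzinfo=timezone.utc)


def get_article_modified(slug):
    """Stand-in for the single DB roundtrip that returns the article's
    last modification time."""
    return datetime(2022, 2, 15, 12, 30, tzinfo=timezone.utc)


@app.route("/article/<slug>")
def article(slug):
    # Never report a Last-Modified earlier than the deploy time, so that
    # template/code changes also invalidate conditional requests, not only
    # content changes in the DB. Truncate microseconds because HTTP dates
    # only have second precision.
    last_modified = max(get_article_modified(slug), DEPLOY_TIME).replace(microsecond=0)

    ims = request.if_modified_since
    if ims and ims.replace(tzinfo=timezone.utc) >= last_modified:
        return "", 304  # empty response, nothing rendered

    # Here the real view would render the Jinja template; a plain string
    # keeps the sketch self-contained.
    response = app.make_response(f"<h1>Article {slug}</h1>")
    response.last_modified = last_modified
    return response
```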

ripperdoc commented 2 years ago

We can use SSI to fetch private parts like the user menu separately while still caching the main page. The benefit is that no conversion to JavaScript is needed and there is no extra delay for the user. https://stackoverflow.com/questions/38000435/nginx-ssi-independent-fragment-caching
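On the template side, the cached page would then contain something like this (the fragment URL is a placeholder and has to match an internal Nginx location like the one sketched in the first comment):

```html
<!-- The surrounding page is cached by Nginx as-is; only this SSI include
     is resolved on every request, so the menu stays per-user. -->
<nav id="user-menu">
  <!--# include virtual="/fragments/usermenu" -->
</nav>
```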