Closed: sbrl closed this issue 5 years ago
Fixed :D :D :D
To elaborate, `add_parser()` takes a new argument: a function that generates the cache ID hash. Said hash generator function takes a single argument, the source text to produce a hash of (bearing in mind that it could be a page's content, a comment, or something else).
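To make the shape of that API concrete: Pepperminty Wiki is written in PHP, but the idea is easy to sketch. Here is a Python analogue — the registry, function names, and signatures are all hypothetical illustrations, not the wiki's actual API.

```python
import hashlib

# Hypothetical parser registry standing in for the wiki's module system.
parsers = {}

def add_parser(name, parse_func, hash_generator):
    """Register a parser together with a function that turns source text
    into the cache ID hash used for its cache entries."""
    parsers[name] = {"parse": parse_func, "hash": hash_generator}

def cache_id(parser_name, source):
    # The wiki can "hash the hash" afterwards to mix in extra information.
    parser_hash = parsers[parser_name]["hash"](source)
    return hashlib.sha256(parser_hash.encode("utf-8")).hexdigest()

# A simple parser whose hash generator looks only at the source text:
add_parser(
    "parser-example",
    lambda source: f"<p>{source}</p>",
    lambda source: hashlib.sha256(source.encode("utf-8")).hexdigest(),
)
```

The key design point is that the hash generator travels with the parser, so each parser module decides for itself what its cache ID depends on.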
If we enable parser caching and include another page as a template with the `{{page name | params}}` syntax, the cache for the including pages is not invalidated. This leads to the awkward issue that stale cache entries can potentially end up being served. Due to the way the new caching system works, it's not easy to detect which cache files we need to delete, either: the new caching process derives each cache entry's ID from a hash of the source text alone.
What we need to do is include a list of timestamps of all the included pages in the computation of the hash. That way, the hash changes whenever the content of any of the included pages changes.
Doing so in a manner that still preserves our modularity will be somewhat challenging though, as the include syntax is a feature specific to the `parser-parsedown` module. To this end, I think what we're after is a way to specify a custom hashing function when registering a parser. This way, the parser can hash the content in its own way, and then Pepperminty Wiki can hash the hash to mix in some additional information afterwards. As for the hashing function itself, I think we can probably do something simple like a `preg_match` against the templating syntax, pull out the page names, and `array_map` them into their associated timestamps.
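That extract-and-map step can be sketched as follows. Again this is a Python analogue of the PHP logic (`preg_match_all` / `array_map`), with a hypothetical timestamp store and an approximation of the real include-syntax regex:

```python
import hashlib
import re

# Hypothetical page metadata store: page name -> last-modified timestamp.
# In the real wiki this would come from the page index.
page_timestamps = {"Main Page": 1554713600, "Sidebar": 1554700000}

def include_aware_hash(source):
    """Hash the source text together with the timestamps of every page it
    includes, so the hash changes when any included page changes."""
    # Pull the page names out of {{page name | params}} includes
    # (roughly what preg_match_all would do in the PHP version).
    names = [m.strip() for m in re.findall(r"\{\{([^|}]+)", source)]
    # Map each included page name to its timestamp (array_map in PHP);
    # unknown pages map to 0 so the hash is still well-defined.
    timestamps = [str(page_timestamps.get(name, 0)) for name in names]
    joined = "\0".join([source] + timestamps)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()
```

With this scheme, bumping an included page's timestamp changes the hash of every page that includes it, so the stale cache entries for the including pages stop matching and get regenerated.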