Minify currently assumes new file content will arrive with a fresh mtime. This makes it fast, but misses file changes in some situations.
Essentially we want to monitor md5(content) of each source. Hashing content is slower than checking timestamps, so we'll compensate by giving the cache a max-age and only revalidating after that period elapses. Nearly every request will be faster because we only have to check the cache's own timestamp.
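A minimal sketch of that fast path, in Python for illustration (the MD2 file name and the 300-second max-age are assumptions, not Minify's actual values):

```python
import os
import time

def cache_is_fresh(md2_path, max_age=300):
    """Fast path: one stat() on the cache's MD2 file, no source I/O at all."""
    try:
        return time.time() - os.path.getmtime(md2_path) < max_age
    except OSError:
        return False  # missing cache file counts as stale
```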
I think each cacheId (mapping to a URL) will have four files:
PT and GZ: plaintext and gzipped versions of the output, ready for readfile()
MD1: a file containing the MD5 of each source.
MD2: a file containing the last time the MD5s were checked, and the ETag and Last-Modified headers to be sent if this file is fresh.
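The four-file layout above could be written out like this. This is a Python sketch under stated assumptions: the file extensions, the JSON sidecar format, and the ETag/Last-Modified derivations are all illustrative, not Minify's actual choices.

```python
import gzip
import hashlib
import json
import os
import time

def write_cache_files(cache_dir, cache_id, output, source_hashes):
    """Write the four cache files for one cacheId (names are illustrative)."""
    base = os.path.join(cache_dir, cache_id)
    with open(base + ".pt", "wb") as f:   # PT: plaintext, ready for readfile()
        f.write(output)
    with open(base + ".gz", "wb") as f:   # GZ: pre-gzipped copy
        f.write(gzip.compress(output))
    with open(base + ".md1", "w") as f:   # MD1: one MD5 per source
        json.dump(source_hashes, f)
    md2 = {                               # MD2: last check time + reply headers
        "checked": time.time(),
        "ETag": '"' + hashlib.md5(output).hexdigest() + '"',
        "Last-Modified": time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()),
    }
    with open(base + ".md2", "w") as f:
        json.dump(md2, f)
```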
On each request, load MD2. If fresh, send the ETag/Last-Modified and the content (or a 304). If MD2 is not fresh, check all sources against MD1, then either regenerate all the caches or simply update the timestamp in MD2.
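The per-request flow could be sketched like this (again in Python; the sidecar file names, the JSON format, the stored "checked" field, and the ETag derivation are all assumptions, and rebuilding the PT/GZ files is left to the caller):

```python
import hashlib
import json
import os
import time

def revalidate(cache_dir, cache_id, read_sources, max_age):
    """Return (md2_headers, regenerated), hashing sources only when MD2 is stale.

    read_sources() returns {source_id: bytes}. On the fresh path, no source
    is read or hashed at all.
    """
    md1_path = os.path.join(cache_dir, cache_id + ".md1")
    md2_path = os.path.join(cache_dir, cache_id + ".md2")
    now = time.time()
    try:
        with open(md2_path) as f:
            md2 = json.load(f)
        if now - md2["checked"] < max_age:
            return md2, False               # fresh: send stored ETag/Last-Modified
    except (OSError, ValueError):
        md2 = None
    # Stale or missing: hash every source and compare against MD1.
    hashes = {sid: hashlib.md5(body).hexdigest()
              for sid, body in read_sources().items()}
    try:
        with open(md1_path) as f:
            unchanged = json.load(f) == hashes
    except (OSError, ValueError):
        unchanged = False
    if unchanged and md2 is not None:
        md2["checked"] = now                # content unchanged: just bump the timestamp
        regenerated = False
    else:
        # Content changed: the caller would rebuild PT/GZ here; we rewrite MD1/MD2.
        with open(md1_path, "w") as f:
            json.dump(hashes, f)
        etag_src = json.dumps(hashes, sort_keys=True).encode()
        md2 = {
            "checked": now,
            "ETag": '"' + hashlib.md5(etag_src).hexdigest() + '"',
            "Last-Modified": time.strftime("%a, %d %b %Y %H:%M:%S GMT",
                                           time.gmtime(now)),
        }
        regenerated = True
    with open(md2_path, "w") as f:
        json.dump(md2, f)
    return md2, regenerated
```

The design choice here is that the common case touches exactly one small file (MD2), and even a forced revalidation that finds no changes costs only the hashing plus one timestamp write.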
This is fine if getting an MD5 is cheap, but some Minify configurations use non-files as sources, and we don't want to be fetching source every few seconds. Not sure how to deal with that.