openresty / srcache-nginx-module

Transparent subrequest-based caching layout for arbitrary nginx locations.
http://wiki.nginx.org/NginxHttpSRCacheModule

A queue for multiple simultaneous requests to the same content. #53

Closed didaka closed 2 years ago

didaka commented 8 years ago

I am trying to use srcache + Redis as a replacement for Varnish, since srcache gives the opportunity of having a shared cache. However, there is one thing I am missing. Let me describe a simple scenario.

We have 1000 concurrent users on the front page, and we need to flush the Redis cache for some reason. While the cache is empty, each of the concurrent requests for the same content (same cache key) will be sent to the backend, which might result in backend failure.

It would be great if srcache supported some kind of queue for multiple requests to the same content, just like proxy_cache does, so that the load on the backend is reduced. (A rough sketch of the proxy_cache behaviour I mean follows below.)

@see: https://www.nginx.com/resources/admin-guide/content-caching/#slice “Sometimes, the initial cache fill operation may take some time, especially for large files. When the first request starts downloading a part of a video file, next requests will have to wait for the entire file to be downloaded and put into the cache.”
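
For reference, the proxy_cache behaviour I have in mind is roughly what proxy_cache_lock gives you; a minimal sketch (the cache path, zone name, and backend are just placeholders):

proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

server {
    location / {
        proxy_cache my_cache;
        proxy_cache_lock on;             # only one request populates a missing cache entry
        proxy_cache_lock_timeout 5s;     # the other requests for the same key wait for it
        proxy_pass http://backend;
    }
}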

agentzh commented 8 years ago

@didaka I think this is already possible with internal redirects, the ngx_lua module, and the lua-resty-limit-traffic Lua library for ngx_lua. The pseudo configuration looks like this (untested, though):

location / {
    srcache_fetch ...;
    echo_exec @jump;
}

location @jump {
    srcache_store ...;
    rewrite_by_lua_block {
        -- call resty.limit.conn's incoming() method with a key like $uri?$args, for example.
    }
    log_by_lua_block {
        -- call resty.limit.conn's leaving() method with the same key
    }
}

Basically, we use echo_exec to initiate an internal redirect to the named location @jump on a cache miss (otherwise srcache_fetch serves the cached response and terminates the request directly). In the named location, we use the ngx_lua module and the resty.limit.conn class from lua-resty-limit-traffic to limit the backend concurrency level by queueing the excess concurrent backend requests. Finally, srcache_store saves the backend response into the cache.
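To make this a bit more concrete, here is a rough (still untested) expansion of the two Lua blocks, following the resty.limit.conn synopsis. The shared dict name, the concurrency numbers, and the proxy_pass backend are just placeholders you would tune for your setup:

lua_shared_dict my_limit_conn_store 10m;

location @jump {
    srcache_store ...;

    rewrite_by_lua_block {
        local limit_conn = require "resty.limit.conn"

        -- allow 10 concurrent backend requests per key, queue up to 20 more,
        -- assuming each backend request takes ~0.5s (all numbers are examples)
        local lim, err = limit_conn.new("my_limit_conn_store", 10, 20, 0.5)
        if not lim then
            ngx.log(ngx.ERR, "failed to instantiate resty.limit.conn: ", err)
            return ngx.exit(500)
        end

        local key = ngx.var.uri .. "?" .. (ngx.var.args or "")
        local delay, err = lim:incoming(key, true)
        if not delay then
            if err == "rejected" then
                return ngx.exit(503)  -- too many queued requests for this key
            end
            ngx.log(ngx.ERR, "failed to limit concurrency: ", err)
            return ngx.exit(500)
        end

        if lim:is_committed() then
            -- remember the state so log_by_lua_block can call leaving()
            ngx.ctx.limit_conn = lim
            ngx.ctx.limit_conn_key = key
            ngx.ctx.limit_conn_delay = delay
        end

        if delay >= 0.001 then
            ngx.sleep(delay)  -- queue this request instead of hitting the backend right away
        end
    }

    log_by_lua_block {
        local lim = ngx.ctx.limit_conn
        if lim then
            local latency = tonumber(ngx.var.request_time) - ngx.ctx.limit_conn_delay
            local conn, err = lim:leaving(ngx.ctx.limit_conn_key, latency)
            if not conn then
                ngx.log(ngx.ERR, "failed to record the connection leaving: ", err)
            end
        end
    }

    proxy_pass http://backend;  # or whatever content handler you use
}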

Of course, an ideal solution is to add such support to ngx_srcache itself. If you're interested in contributing a patch, then I'd be glad to review and merge this feature.