Closed: splitice closed this issue 11 years ago
Hello!
On Wed, Mar 6, 2013 at 5:22 AM, splitice wrote:
Basically, I run a cross-datacenter slave system. PUT requests require writing to a master server which can be up to 100ms away in latency. Our system depends on low-latency serving of requests.
Does srcache_store, while waiting on a response, block the request from being output?
No. The original response's data chunks are emitted as soon as they arrive; srcache_store merely copies and collects the data in an output filter, without delaying it on its way downstream.
But please note that even though all the response data is sent out immediately, the current Nginx request will not finish until the srcache_store subrequest completes. In practice this means a delay either in closing the TCP connection on the server side (when HTTP keepalive is disabled; well-behaved HTTP clients close the connection actively on their own side anyway, so this adds no extra delay or other issues) or in serving the next request sent on the same TCP connection (when HTTP keepalive is in effect).
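For concreteness, here is a minimal configuration sketch along the lines of the module's documented examples, assuming the companion memc-nginx-module and a memcached instance at 127.0.0.1:11211; the /memc location, the key scheme, and the backend_master upstream name are placeholders:

```nginx
# Hypothetical example: cache responses in a local memcached so reads
# stay fast even when the upstream master is ~100ms away.
location = /memc {
    internal;

    memc_connect_timeout 100ms;
    memc_send_timeout 100ms;
    memc_read_timeout 100ms;

    set $memc_key $query_string;
    set $memc_exptime 300;
    memc_pass 127.0.0.1:11211;
}

location /api {
    set $key "$uri?$args";

    # Try the local cache first; on a miss, the upstream response is
    # sent to the client and stored via the /memc subrequest.
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;

    proxy_pass http://backend_master;
}
```

Even with a setup like this, the srcache_store subrequest only extends the tail of the request lifetime as described above; the response body itself is never held back from the client.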
I've added this explanation to the documentation for srcache_store:
http://wiki.nginx.org/HttpSRCacheModule#srcache_store
Thanks for asking :)
Best regards, -agentzh
Thank you, this sounds amazing; perfect for situations where the master is in a separate data center.
OK, firstly, I'm sorry if this is documented anywhere; I couldn't find it. It's a question regarding the way srcache_store works, which may turn into a feature request.
Basically, I run a cross-datacenter slave system. PUT requests require writing to a master server which can be up to 100ms away in latency. Our system depends on low-latency serving of requests.
Does srcache_store, while waiting on a response, block the request from being output?