Consider these recommendations and the author's blog post about a proper implementation of HTTP pipelining.
Actually, the definition of an idempotent request is application specific (see my presentation, slides 6-8): e.g. GET /update?a=2 could be non-idempotent, while a POST request with Web-search parameters can be idempotent. So a configuration option like the existing cache_methods and caching policy must be introduced to specify idempotent requests. Some DDoS mitigation techniques switch on caching of responses for non-idempotent requests (e.g. we can send the same responses to non-idempotent POSTs), but we still should not pipeline them, to avoid consistency issues on the application side.
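As a rough illustration only (none of this is existing Tempesta code: the method enum, the rule structure and req_is_idempotent() below are hypothetical), such a configuration option could drive a per-request classifier along these lines:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical method enumeration and configuration rule; Tempesta has
 * its own definitions, these exist only for this sketch. */
enum http_method { METH_GET, METH_HEAD, METH_POST };

typedef struct {
	enum http_method	method;		/* method the rule matches */
	const char		*uri_prefix;	/* URI prefix the rule matches */
	bool			idempotent;	/* override for matching requests */
} idem_rule_t;

/*
 * Configured rules (the proposed option) override the default RFC 7231
 * method semantics, so GET /update?a=2 can be declared non-idempotent
 * and a Web-search POST can be declared idempotent.
 */
static bool
req_is_idempotent(enum http_method method, const char *uri,
		  const idem_rule_t *rules, size_t n_rules)
{
	size_t i;

	for (i = 0; i < n_rules; ++i)
		if (method == rules[i].method
		    && !strncmp(uri, rules[i].uri_prefix,
				strlen(rules[i].uri_prefix)))
			return rules[i].idempotent;

	/* Default: only safe methods are treated as idempotent. */
	return method == METH_GET || method == METH_HEAD;
}
```

E.g. a rule {METH_GET, "/update", false} makes GET /update?a=2 non-idempotent, while {METH_POST, "/search", true} marks search POSTs as idempotent.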
Please don't forget to create a task for @sergsever to write an appropriate functional test. A verbose description of the test logic (what it should send and how to verify the results) is appreciated.
Please consider this article about QoS. It makes sense to add a new configuration option for a server group which specifies the queue size. If all the server connections are busy (tfw_server_queue_full() returns true), we should put the request into a new queue of the configured size. When srv_conn->qsize decrements, we can schedule the next request from the head of the new queue. If the new queue is full, then just do as we do currently and send an error response to the client (however, it should be a 50x response instead of a 40x response).

Also please add the sizes of the introduced queues (and maybe some existing queues) to /proc/tempesta/perfstat: the queue sizes are very important performance characteristics.
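A minimal sketch of that fallback path, assuming Tempesta-like types: only tfw_server_queue_full() and srv_conn->qsize come from the text above; the wait-queue fields, the perfstat counter and the helpers tfw_http_req_fwd()/tfw_http_send_resp_503() are assumptions with approximated signatures, and locking is omitted.

```c
#include <linux/list.h>

/* Assumed helper declarations (signatures approximated for the sketch). */
bool tfw_server_queue_full(TfwSrvConn *srv_conn);
int tfw_http_req_fwd(TfwSrvConn *srv_conn, TfwHttpReq *req);
int tfw_http_send_resp_503(TfwHttpReq *req);

/* Forward the request or park it in the group's bounded wait queue. */
static int
tfw_sg_fwd_or_enqueue(TfwSrvGroup *sg, TfwSrvConn *srv_conn, TfwHttpReq *req)
{
	if (!tfw_server_queue_full(srv_conn))
		return tfw_http_req_fwd(srv_conn, req);

	/* sg->wait_qsize_max is set by the new per-group config option. */
	if (sg->wait_qsize < sg->wait_qsize_max) {
		list_add_tail(&req->fwd_list, &sg->wait_queue);
		++sg->wait_qsize;
		++sg->stats.wait_queued; /* export via /proc/tempesta/perfstat */
		return 0;
	}

	/* Connection queues and the wait queue are all full: overload. */
	return tfw_http_send_resp_503(req);
}

/* Called when srv_conn->qsize decrements and the connection has room:
 * schedule the next request from the head of the wait queue. */
static void
tfw_sg_drain_wait_queue(TfwSrvGroup *sg, TfwSrvConn *srv_conn)
{
	TfwHttpReq *req;

	if (list_empty(&sg->wait_queue))
		return;
	req = list_first_entry(&sg->wait_queue, TfwHttpReq, fwd_list);
	list_del(&req->fwd_list);
	--sg->wait_qsize;
	tfw_http_req_fwd(srv_conn, req);
}
```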
UPD Actually, tfw_server_queue_full(), which is called in the hash and rr schedulers, already cares about full server queues.
Implemented in #660 (merge commit c40924b).
RFC 7230 6.3.2 prohibits pipelining of requests with non-idempotent methods. So if we receive such a client request, then:

- ss_send() it immediately;
- ss_send() the waiting idempotent requests.

See also Making HTTP Pipelining Usable on the Open Web.
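For illustration only, the forwarding path could look roughly like the sketch below. send_to_srv() stands in for the actual ss_send() call (its real signature is not reproduced here), and the fwd_queue/fwd_list names are assumptions rather than existing Tempesta fields.

```c
#include <linux/list.h>

/* Hypothetical wrapper that hands one request to ss_send(). */
void send_to_srv(TfwSrvConn *srv_conn, TfwHttpReq *req);

/* Handle a client request whose method is non-idempotent. */
static void
tfw_fwd_nonidempotent(TfwSrvConn *srv_conn, TfwHttpReq *req)
{
	TfwHttpReq *r, *tmp;

	/* 1. ss_send() the non-idempotent request immediately, in its own
	 *    send, so it is not pipelined with other requests. */
	send_to_srv(srv_conn, req);

	/* 2. ss_send() the idempotent requests waiting in the connection's
	 *    forwarding queue. */
	list_for_each_entry_safe(r, tmp, &srv_conn->fwd_queue, fwd_list) {
		list_del(&r->fwd_list);
		send_to_srv(srv_conn, r);
	}
}
```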