Linux's epoll+accept queue is fundamentally LIFO (see the writeup at https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/). Because of this, neither Unicorn nor Pitchfork properly balances load between workers: unless the deployment is at capacity, the first workers will handle disproportionately more work.
For example, running: wrk -c 4 -t 4 'http://localhost:8080/'
With the master branch and config:
listen 8080
worker_processes 16
Shows a big imbalance in the number of requests handled by each worker:
In some ways this behavior can be useful, but in others it may be undesirable. Most notably, it can create a situation where some workers are only used when there is a spike of traffic, and when that spike happens, it hits colder workers.
To work around this issue, we can create multiple file descriptors for a single port, and limit each worker to a subset of the file descriptors. Linux will then round robin incoming requests between them.
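As a rough sketch of the idea (not the actual Pitchfork implementation), this is what opening several accept queues for one port can look like with Ruby's standard socket library and SO_REUSEPORT; the port and queue count here are illustrative:

```ruby
require "socket"

PORT   = 8080 # illustrative, matches the benchmark above
QUEUES = 4    # number of separate accept queues for the port

# Each socket bound with SO_REUSEPORT gets its own accept queue;
# the kernel distributes incoming connections between them.
sockets = QUEUES.times.map do
  s = Socket.new(:INET, :STREAM)
  s.setsockopt(:SOCKET, :REUSEADDR, true)
  s.setsockopt(:SOCKET, :REUSEPORT, true) # allow multiple binds to the same port
  s.bind(Addrinfo.tcp("127.0.0.1", PORT))
  s.listen(1024)
  s
end
```

Each worker then accepts only from its assigned subset of these sockets, instead of every worker competing on one shared accept queue.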
Running the same benchmark with this branch and config:
The above example still doesn't achieve perfectly fair load balancing, but that could be achieved by creating even more queues. The goal, however, isn't perfectly fair load balancing, simply to ensure every worker has a chance to do some minimal warmup.
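For illustration, one simple (hypothetical) way to limit each worker to a subset of the queues is to assign queues by worker number modulo the queue count, which spreads workers evenly; the worker and queue counts below match the config above:

```ruby
# Hypothetical queue assignment: worker_nr % QUEUES picks which
# accept queue each of the 16 workers listens on.
WORKERS = 16
QUEUES  = 4

assignment = (0...WORKERS).to_h { |worker_nr| [worker_nr, worker_nr % QUEUES] }
per_queue  = assignment.values.tally # how many workers share each queue
```

With 16 workers and 4 queues, each queue ends up with 4 workers, so every queue (and thus every worker) sees a share of the traffic.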
Closes: https://github.com/Shopify/pitchfork/issues/71