Closed by tanner0101, 5 years ago
Not seeing the expected results. CircularBuffer is ~40% slower.
Story on Linux is similar, ~40% slowdown:
testPerformance()
  expected: 0.115 seconds
  measured: 0.185 seconds
Popping/pushing the last element of the available array largely avoids memory allocations anyway, so that should be fast enough. Or is this expected to be a bottleneck?
@MrMage the problem with the current method is the pool operates as a stack instead of a queue. Ideally we'd have FIFO behavior so that requests for a connection can't be starved by heavy usage.
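The starvation point can be illustrated with a minimal sketch (Python for brevity; the `stack`/`queue` variables and waiter names are hypothetical, not NIOKit identifiers): a LIFO pool serves the most recent waiter first, so under sustained load the oldest request keeps getting pushed back.

```python
from collections import deque

# Three requests arrive in order w1, w2, w3 and all have to wait.
stack, queue = [], deque()
for waiter in ["w1", "w2", "w3"]:
    stack.append(waiter)
    queue.append(waiter)

# LIFO (stack): the newest waiter is served first, the oldest last.
served_lifo = [stack.pop() for _ in range(3)]
# FIFO (queue): waiters are served in arrival order.
served_fifo = [queue.popleft() for _ in range(3)]

print(served_lifo)  # ['w3', 'w2', 'w1'] -- w1 waits longest
print(served_fifo)  # ['w1', 'w2', 'w3'] -- arrival order preserved
```

If new waiters keep arriving faster than connections are released, the LIFO ordering means `w1` may never be served at all.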
@tanner0101 but how would the underlying buffer implementation change that behavior? If by FIFO behavior you mean "the first to request a connection should receive the next available connection", I don't think that behavior depends on the actual implementation of the buffer structure.
The choice of a ring buffer only ensures (I think) that all connections get used with roughly the same frequency (instead of mostly using just one connection), which shouldn't make a difference in practice.
The problem is that using an array as a queue results in O(n) performance. You need either:

- prepend O(n) + popLast O(1), or
- append O(1) + popFirst O(n)

With a circular buffer, you get:

- prepend O(1) + removeLast O(1) (amortized)
- append O(1) + removeFirst O(1) (amortized)

@tanner0101 But what's the problem with doing append + popLast on the array? Both have O(1) complexity.
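For illustration, here is a minimal ring-buffer sketch (Python; this shows the general technique behind SwiftNIO's CircularBuffer, not its actual implementation): a head index plus wrap-around indexing makes both `append` and `remove_first` O(1) amortized, with the only O(n) work happening on the occasional resize.

```python
class RingBuffer:
    """Illustrative ring buffer: O(1) amortized append and remove_first."""

    def __init__(self, capacity=8):
        self._store = [None] * capacity
        self._head = 0      # index of the first (oldest) element
        self._count = 0

    def append(self, item):
        if self._count == len(self._store):
            self._grow()
        tail = (self._head + self._count) % len(self._store)
        self._store[tail] = item
        self._count += 1

    def remove_first(self):
        if self._count == 0:
            raise IndexError("buffer is empty")
        item = self._store[self._head]
        self._store[self._head] = None
        self._head = (self._head + 1) % len(self._store)
        self._count -= 1
        return item

    def _grow(self):
        # Copy elements in logical order into a store twice the size.
        new = [None] * (2 * len(self._store))
        for i in range(self._count):
            new[i] = self._store[(self._head + i) % len(self._store)]
        self._store, self._head = new, 0

buf = RingBuffer(capacity=2)
for n in range(5):
    buf.append(n)  # triggers two resizes; appends stay O(1) amortized
out = [buf.remove_first() for _ in range(5)]
print(out)  # [0, 1, 2, 3, 4] -- FIFO order preserved across resizes
```

A plain array can only get O(1) at both ends by giving up FIFO (append + popLast), which is exactly the stack behavior being argued against.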
@MrMage Because that results in FILO, which while it works, will use some connections a lot while leaving other ones untouched. We just want to balance the load across all the connections.
Thank you for the elaboration. But what is the tangible benefit of balancing the load across all connections?
@MrMage @calebkleveter balancing the connections is not the problem actually. It's the requests for connections that are being stored here, and that should be FIFO. Imagine this case:
@tanner0101 that makes more sense, but as I explained in https://github.com/vapor/nio-kit/pull/25#issuecomment-463734852 that problem is independent of the choice of buffer structure; instead you need a queue (or ring buffer) structure for the waiting requests, not the available connections.
Thank you for catching that @MrMage.
I've updated the PR now with FIFO for waiters, as it should have been. Performance actually improves slightly, which is great since this is technically a bug fix.
I've also added a test to ensure waiters are fulfilled FIFO.
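A toy version of that FIFO-waiter logic and its test might look like this (Python sketch; `Pool`, `request`, and `release` are illustrative names, not NIOKit's API): connections are handed out from a free list, and when none are free, callbacks wait in a queue and are fulfilled in arrival order as connections are released.

```python
from collections import deque

class Pool:
    """Toy connection pool sketch with FIFO waiter fulfillment."""

    def __init__(self, connections):
        self.free = list(connections)
        self.waiters = deque()  # FIFO: oldest waiter is served first

    def request(self, callback):
        if self.free:
            callback(self.free.pop())
        else:
            self.waiters.append(callback)

    def release(self, conn):
        if self.waiters:
            self.waiters.popleft()(conn)  # hand straight to oldest waiter
        else:
            self.free.append(conn)

# Test: with one connection, queued waiters must fire in FIFO order.
order = []
pool = Pool(["c1"])
pool.request(lambda c: order.append("first"))   # gets c1 immediately
pool.request(lambda c: order.append("second"))  # waits
pool.request(lambda c: order.append("third"))   # waits
pool.release("c1")                              # fulfills "second"
pool.release("c1")                              # fulfills "third"
print(order)  # ['first', 'second', 'third']
```

Swapping the `deque` for a stack here would fulfill "third" before "second", which is the starvation bug the PR fixes.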
Results on Linux are even better:
[PERFORMANCE] testPerformance()
  expected: 0.115 seconds
  Test Case 'ConnectionPoolTests.testPerformance' measured average: 0.088 seconds
Changes: ConnectionPool storage from Array to CircularBuffer.