Closed: jamespic closed this issue 7 years ago
@jamespic Thanks for the update. Would it be too much trouble to ask you to run the same test on 2.4.2-RC2? While I don't expect a significantly different result from 2.4.1, there were some minor changes in the ConcurrentBag hot path, and I'd be curious about the effect, if any.
Absolutely. Here are three runs with 2.4.2-RC1:
And for comparison, ArrayBlockingQueue:
The results look interesting to me. My instinctive guess was that the removal of the hasQueuedThreads
check around the fast-path acquire would make it less fair, but I can't see a clear winner. It might be fun to try sticking some instrumentation in, to see which paths are being used, or to come up with an objective measure of fairness.
Of course, the test is designed to maximise the odds of the releasing thread barging, so it'll highlight even fairly minor unfairness. In most real-world applications, the releasing thread wouldn't try to immediately re-acquire a connection.
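To make the barging scenario concrete, here is a minimal sketch of that pattern (my own illustration, not jamespic's actual Barge harness; the class name `BargeSketch` is hypothetical). The thread holding the pool's only "connection" releases it and immediately tries to grab it back, racing a parked waiter; with a non-fair semaphore, the releasing thread's fast-path re-acquire frequently wins:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

// Sketch of the barging pattern the fairness test maximises: release a
// permit, then immediately try to re-acquire it, racing a parked waiter.
public class BargeSketch {

    // Runs n release/immediate-re-acquire cycles against one parked waiter
    // and returns how many times the releasing thread barged back in.
    static int bargeCount(int n) throws InterruptedException {
        Semaphore pool = new Semaphore(1, false); // non-fair: barging allowed
        CountDownLatch started = new CountDownLatch(1);

        pool.acquire(); // this thread holds the single permit

        Thread waiter = new Thread(() -> {
            started.countDown();
            try {
                pool.acquire(); // parks until a permit becomes available
                pool.release(); // hand it straight back in this sketch
            } catch (InterruptedException ignored) {
            }
        });
        waiter.start();
        started.await();
        Thread.sleep(50); // give the waiter time to park on the semaphore

        int barges = 0;
        for (int i = 0; i < n; i++) {
            pool.release();
            if (pool.tryAcquire()) {
                barges++;       // the fast-path re-acquire beat the waiter
            } else {
                pool.acquire(); // the waiter won; wait for it to release
            }
        }
        pool.release(); // let the waiter finish if it never got a turn
        waiter.join();
        return barges;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Barged on " + bargeCount(10_000) + " of 10000 cycles");
    }
}
```

In a real application the releasing thread would do useful work before re-acquiring, which is exactly why the test overstates the unfairness a production workload would see.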
@jamespic Thanks for the update. Very interesting! What Java version, and what OS did you run the harness on?
This was on Oracle Java 1.8.0_25, Windows 7, Intel Core i7. Haven't tried it under Linux yet.
@jamespic I made some minor changes in my own fork. The results are more consistently fair, at least on my iMac (Intel Core i5) with Java 1.8.0_60, but every few runs I get a skewed result.
Also running with:
export MAVEN_OPTS="-Xmx4g -Xms4g"
mvn exec:java -Dexec.mainClass="com.trustiv.barge.Barge"
to avoid garbage collection during the run.
EDIT: I'm going to make a modification to use the actual HikariDataSource with a stub DataSource/Connection.
Yes, using a full HikariDataSource is likely to be more realistic, and I'd suspect it would reduce the amount of unfairness: the more clock cycles elapse between releasing and acquiring connections, the less chance there is for barging.
And in truth, unfairness isn't that big a deal, especially since AbstractQueuedLongSynchronizer unparks threads fairly, so in most highly contended cases (where all waiters are parked) there'll be minimal unfairness. My experience with real applications is that Hikari's contended performance is much more predictable than c3p0's.
So I'd think you might as well maximise throughput and ignore fairness. Or, if fairness is important, add an optional mode that's completely fair at the expense of throughput (i.e. no acquires without going through the synchronizer, and the synchronizer calls hasQueuedPredecessors).
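The fair mode described above could be sketched roughly as follows (my own illustration, not HikariCP code; `FairPermits` is a hypothetical name). It mirrors the FairSync pattern used by java.util.concurrent.Semaphore: every acquire goes through the synchronizer, and the synchronizer declines the fast path whenever hasQueuedPredecessors() sees a thread already queued:

```java
import java.util.concurrent.locks.AbstractQueuedLongSynchronizer;

// Sketch of a completely fair permit pool: no barging past queued threads.
public class FairPermits {

    private static final class Sync extends AbstractQueuedLongSynchronizer {
        Sync(long permits) { setState(permits); }

        long permits() { return getState(); }

        @Override
        protected long tryAcquireShared(long acquires) {
            for (;;) {
                // No barging: defer to any thread queued ahead of us.
                if (hasQueuedPredecessors()) return -1;
                long available = getState();
                long remaining = available - acquires;
                if (remaining < 0 || compareAndSetState(available, remaining)) {
                    return remaining; // negative means failure; caller queues
                }
            }
        }

        @Override
        protected boolean tryReleaseShared(long releases) {
            for (;;) {
                long current = getState();
                if (compareAndSetState(current, current + releases)) return true;
            }
        }
    }

    private final Sync sync;

    public FairPermits(long permits) { this.sync = new Sync(permits); }

    public void acquire() { sync.acquireShared(1); }

    public void release() { sync.releaseShared(1); }

    public long availablePermits() { return sync.permits(); }
}
```

The hasQueuedPredecessors check is exactly what makes it fair and what costs throughput: a releasing thread can never take the fast path while anyone is waiting.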
@jamespic I thought I'd provide an update on this. I re-ran your fairness harness with the 2.6.0 ConcurrentBag and this is the result:
Longest unfair run was 1004 (out of 500,000)
Kolmogorov-Smirnov test of startTimestamps has p-value 1.3482803762343565E-10
Looks almost like a single line.
If you zoom in on various parts you can find the differences, but overall I'm pretty happy with the increased fairness while at the same time increasing performance under contention.
I've updated my fork of your original harness.
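For readers wondering how a "longest unfair run" figure like the one above might be computed, here is one plausible interpretation (my own sketch, not necessarily the harness's exact definition; `FairnessMetric` is a hypothetical name). Given start timestamps listed in completion order, it finds the longest stretch of consecutive completions that started before a request that was already satisfied:

```java
// Sketch of a "longest unfair run" metric over start timestamps recorded
// in completion order. A perfectly fair (FIFO) pool yields 0.
public class FairnessMetric {

    // startsInCompletionOrder[i] = start timestamp of the i-th request
    // to be satisfied.
    static int longestUnfairRun(long[] startsInCompletionOrder) {
        int longest = 0;
        int run = 0;
        long latestStartSatisfied = Long.MIN_VALUE;
        for (long start : startsInCompletionOrder) {
            if (start < latestStartSatisfied) {
                run++; // this request started before one already satisfied
                longest = Math.max(longest, run);
            } else {
                latestStartSatisfied = start;
                run = 0;
            }
        }
        return longest;
    }

    public static void main(String[] args) {
        long[] fair = {1, 2, 3, 4, 5};
        long[] unfair = {1, 4, 2, 3, 5}; // requests 2 and 3 were overtaken
        System.out.println(longestUnfairRun(fair));   // 0
        System.out.println(longestUnfairRun(unfair)); // 2
    }
}
```

By this kind of measure, a longest run of 1004 out of 500,000 requests is a small fraction, consistent with the "looks almost like a single line" observation.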
The wiki includes a page on queue fairness, based on a blog post I wrote. I believe the page was accurate at the time it was posted, but I no longer stand by my conclusions, partly due to performance optimisations between the HikariCP 2.3 and 2.4 branches.
HikariCP 2.4 introduced a new synchronizer in ConcurrentBag. This new synchronizer is undoubtedly faster, but that speed comes at the expense of fairness. This is borne out by tests: re-running them with HikariCP 2.4.1 produced the following graph, showing the divergence between the order in which borrow attempts are made and the order in which they are satisfied.
Based on this, I'd no longer support the assertion that HikariCP is "nearly fair".
It's also been my experience that re-running the test with older versions of HikariCP on newer versions of Oracle JDK leads to less fairness - I'd speculate they've added some sort of thread affinity on compareAndSet, which would make "nearly fair" implementations less fair.