Closed schlosna closed 1 month ago
Proposed change is a clear improvement, without adding any complexity, and probably the best way to go. That said, I wonder if we should consider Cliff Click's non-blocking concurrent hash map; the downside is that values may be re-computed by multiple threads in races, rather than acquiring a lock and executing the mapping function at most once.
In the case of NonBlockingHashMap re-computations, we would pay the cost of Paxos round(s) to construct the timestamp client, so we would want to understand the tradeoffs there. I'm assuming in-memory blocking on the same key is going to be cheaper than a Paxos round, but it would be an interesting test.
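The at-most-once property discussed above can be demonstrated in isolation. This is a minimal sketch (names like `namespace` and `client-for-...` are illustrative, not from the PR): eight threads race on `computeIfAbsent` for the same key, and ConcurrentHashMap guarantees the mapping function runs exactly once, with the losing threads blocking briefly. A NonBlockingHashMap-style design would instead allow several threads to run the expensive construction and discard all but one result.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ComputeOnceDemo {
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        AtomicInteger computations = new AtomicInteger();
        int threads = 8;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch start = new CountDownLatch(1);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                start.await(); // line all threads up to race on the same key
                return map.computeIfAbsent("namespace", key -> {
                    // Stand-in for the expensive client construction (Paxos rounds etc.)
                    computations.incrementAndGet();
                    return "client-for-" + key;
                });
            });
        }
        start.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // computeIfAbsent is atomic per key: the function ran exactly once
        System.out.println("computations = " + computations.get()); // prints 1
    }
}
```

This is why the in-memory blocking is the price paid for never repeating the Paxos-backed construction.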
The case where I suspect we hit these bottlenecks most frequently is when timelock services are upgrading and leadership transitions to a freshly started node, so all of the timelock namespace clients must be initialized quickly as timelock consumers transition from previous leader to new leader. A larger, separate change might be to consider pre-initializing namespace clients via something like gossip so that there is less of a stampede on handoff.
We should consider putting the constant (the number of expected clients) somewhere central, so that when we change it we don't forget to update everything (though note that the number of clients differs depending on stack type).
I consolidated this into TimelockNamespaces as int estimatedClients().
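A minimal sketch of that consolidation, assuming names beyond `TimelockNamespaces` and `estimatedClients()` (the value 500 and the `clients` field are hypothetical, not taken from the PR diff):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: one place owns the expected-client count, so callers
// can't drift out of sync when the estimate changes.
final class TimelockNamespaces {
    // Illustrative value; the real number differs by stack type.
    static int estimatedClients() {
        return 500;
    }

    // Presize the map so its table is allocated once at startup rather than
    // resized repeatedly while namespace clients register.
    private final ConcurrentMap<String, Object> clients =
            new ConcurrentHashMap<>(estimatedClients());
}
```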
Released 0.1089.0
General
Before this PR:
The Timelock service sees contention on startup when initializing timelock clients.
From https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/util/concurrent/ConcurrentHashMap.java#L348-L349
After this PR:
==COMMIT_MSG== Reduce startup contention in Timelock ConcurrentHashMaps ==COMMIT_MSG==
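The linked JDK comment concerns table resizing being relatively slow under concurrent writes. A hedged sketch of the general fix the commit message describes (identifiers and the estimate of 100 are illustrative, not from the PR diff): pass the expected element count to ConcurrentHashMap's sizing constructor, which accommodates that many elements without dynamic resizing.

```java
import java.util.concurrent.ConcurrentHashMap;

public class PresizedMaps {
    public static void main(String[] args) {
        // Default construction starts with a small table that must be resized
        // repeatedly as entries are added; resizing is slow under contention.
        ConcurrentHashMap<String, String> resizedOnDemand = new ConcurrentHashMap<>();

        // The sizing constructor allocates a table big enough for the given
        // number of elements up front, so startup registration never resizes.
        int expectedNamespaces = 100; // illustrative estimate
        ConcurrentHashMap<String, String> presized =
                new ConcurrentHashMap<>(expectedNamespaces);

        for (int i = 0; i < expectedNamespaces; i++) {
            presized.put("namespace-" + i, "client-" + i);
        }
        System.out.println(presized.size()); // prints 100
    }
}
```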
Priority:
Concerns / possible downsides (what feedback would you like?):
Is documentation needed?:
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
Does this PR need a schema migration?
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
What was existing testing like? What have you done to improve it?:
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
Has the safety of all log arguments been decided correctly?:
Will this change significantly affect our spending on metrics or logs?:
How would I tell that this PR does not work in production? (monitors, etc.):
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
Development Process
Where should we start reviewing?:
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
Please tag any other people who should be aware of this PR: @jeremyk-91 @sverma30 @raiju