palantir / atlasdb

Transactional Distributed Database Layer
https://palantir.github.io/atlasdb/
Apache License 2.0

Fix flakiness on poisoningMultipleTimesIsAllowed test #6974

Closed LucasIME closed 4 months ago

LucasIME commented 4 months ago

General

Before this PR: poisoningMultipleTimesIsAllowed test was flaky (example)

The flakiness was caused by a race condition between InDbTimestampBoundStore.getUpperLimit and InDbTimestampBoundStore.storeUpperLimit.

getUpperLimit operates by setting the limit to 10_000 if one doesn't yet exist, or simply returning the existing limit if one is found.

storeUpperLimit always either updates the existing limit or creates a new one if none exists.

Both methods are synchronized. I originally thought that only meant no two storeUpperLimit or two getUpperLimit calls could execute at the same time, while one storeUpperLimit and one getUpperLimit could still run in parallel. (This was just me not knowing how synchronized works, lol.) In fact, both methods lock on the store instance, so neither can overlap with the other on the same store. But the race below can still happen, because each test creates its own store object, and synchronized methods on different objects acquire different locks, so they don't serialize against each other.
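The per-instance locking behaviour can be shown in a small, deterministic sketch (the `Store` class and its methods here are illustrative, not AtlasDB code): while one thread holds instance `a`'s monitor, a synchronized method on a different instance `b` still runs to completion.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class MonitorDemo {
    static class Store {
        private final CountDownLatch done = new CountDownLatch(1);

        // synchronized instance methods lock on `this`, not on the class
        synchronized void call() {
            done.countDown();
        }

        boolean ranWithin(long millis) throws InterruptedException {
            return done.await(millis, TimeUnit.MILLISECONDS);
        }
    }

    /** Returns true iff b's synchronized method ran while a's monitor was held. */
    static boolean demo() throws InterruptedException {
        Store a = new Store();
        Store b = new Store();
        synchronized (a) {
            // b.call() is synchronized, but on b's monitor, not a's, so it
            // proceeds even though this thread is still holding a's lock.
            new Thread(b::call).start();
            return b.ranWithin(2000);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("ran while other instance was locked: " + demo());
    }
}
```

This is exactly why two test classes, each with their own store instance, get no mutual exclusion from the `synchronized` keyword.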

So the case for the race we're fixing here is as follows:

  1. The poisonsEmptyTableAndReturnsStoredBound test starts and calls getUpperLimit
  2. Since there is initially no limit present, getUpperLimit does not return early and instead proceeds toward writing the default
  3. poisoningMultipleTimesIsAllowed starts running and calls store.storeUpperLimit(12000)
  4. Due to the race, poisoningMultipleTimesIsAllowed's storeUpperLimit write persists before poisonsEmptyTableAndReturnsStoredBound's getUpperLimit is allowed to proceed with its write. (In-memory expected limit = 12000, DB limit = 12000)
  5. poisonsEmptyTableAndReturnsStoredBound's getUpperLimit continues executing and persists the default limit of 10_000. (In-memory expected limit = 12000, DB limit = 10000)
  6. poisoningMultipleTimesIsAllowed continues and calls store.getUpperLimit, which crashes because its expected in-memory limit no longer matches the one in the database.
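The six steps above can be replayed single-threadedly in a small simulation. This is not the real InDbTimestampBoundStore; the shared `dbLimit` field stands in for the one table row both test stores write to, and the limit values mirror the description:

```java
public class RaceReplay {
    static final long DEFAULT = 10_000;
    static Long dbLimit; // the single "DB row" both store instances share

    static class Store {
        Long expected; // limit this instance believes it last persisted

        long getUpperLimit() {
            if (expected != null && !expected.equals(dbLimit)) {
                // step 6: in-memory bookkeeping disagrees with the DB
                throw new IllegalStateException(
                        "expected " + expected + " but found " + dbLimit);
            }
            expected = dbLimit;
            return dbLimit;
        }

        void storeUpperLimit(long limit) { // upsert, as described above
            dbLimit = limit;
            expected = limit;
        }
    }

    static String replay() {
        dbLimit = null;
        Store a = new Store(); // poisonsEmptyTableAndReturnsStoredBound's store
        Store b = new Store(); // poisoningMultipleTimesIsAllowed's store

        boolean aSawNoRow = (dbLimit == null); // step 2: a finds the table empty
        b.storeUpperLimit(12_000);             // steps 3-4: b's write lands first
        if (aSawNoRow) {                       // step 5: a resumes and clobbers
            dbLimit = DEFAULT;
            a.expected = DEFAULT;
        }
        try {
            b.getUpperLimit();                 // step 6: crashes
            return "no failure";
        } catch (IllegalStateException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(replay()); // expected 12000 but found 10000
    }
}
```

With a per-test table prefix, `a` and `b` would each have their own `dbLimit`, and step 5 could no longer clobber `b`'s write.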

After this PR:

==COMMIT_MSG== We now namespace each test, creating a different table prefix for each of them so they don't clash with one another. ==COMMIT_MSG==

Priority:

Concerns / possible downsides (what feedback would you like?):

Is documentation needed?:

Compatibility

Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:

Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:

The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):

Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:

Does this PR need a schema migration?

Testing and Correctness

What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:

What was existing testing like? What have you done to improve it?:

If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:

If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:

Execution

How would I tell this PR works in production? (Metrics, logs, etc.):

Has the safety of all log arguments been decided correctly?:

Will this change significantly affect our spending on metrics or logs?:

How would I tell that this PR does not work in production? (monitors, etc.):

If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:

If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):

Scale

Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:

Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:

Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:

Development Process

Where should we start reviewing?:

If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:

Please tag any other people who should be aware of this PR: @jeremyk-91 @sverma30 @raiju

LucasIME commented 4 months ago

It looks like these are synchronized on the InDbTimestampBoundStore (object lock) though, which means that they actually shouldn't be able to execute at the same time for a given instance of the store?

That said, the issue here (and the race you highlight) is still valid: if I'm not mistaken, the problem is that there is a different instance of the store per test method, so they acquire different object locks. (The trace you identified doesn't require this, but it is actually possible to have concurrent calls to one of the methods, if I'm not wrong.)

Couple of small questions on cleanup and nextUniquePrefix, but this change makes sense. Thanks!

Lol, biggest TIL in a while. Took me almost 6 years of Java to learn today that I had misunderstood how synchronized methods work. Well, guess better late than never 😅