Before this PR: The SnapshotTransaction class has too many responsibilities, including managing bare read sentinels after a user data table has been manipulated non-atomically (e.g. truncation, or restoring from backup).
There is also a bug in the original implementation, though it is probably benign given that it manifests very loudly when it does occur. The current read-sentinel handling acknowledges that bare sentinels, or sentinels covered only by uncommitted values, are fair game to be treated as empty. (Note that this should never happen under the Atlas write protocol: a sentinel is only put when Sweep deletes something, and failed transactions are generally cleaned up with direct deletes, not range tombstones.)
The code for handling this, however, used a null check on the return value of a TransactionService lookup. Concretely: suppose some cell has a bare sentinel, and we then attempt to write to that cell but abort. All future reads of that cell will see the aborted version, go through post-filtering, see the sentinel, conclude that it is not a bare sentinel (because there is a timestamp -> -1 entry in the transactions table), and panic. But it is effectively a bare sentinel; the code just fails to recognise it as such.
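The distinction above can be sketched in a few lines. This is an illustrative model only, not the real AtlasDB API: the map stands in for the TransactionService, and the constant name and -1 value for aborted transactions are assumptions for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: why a plain null check misclassifies sentinels covered only by aborted writes.
public class BareSentinelSketch {
    // Assumed convention: aborted transactions map to -1 in the transactions table.
    static final long FAILED_COMMIT_TS = -1L;

    // Stand-in for a TransactionService lookup: absent key means no entry at all.
    static final Map<Long, Long> txnTable = new HashMap<>();

    // Old logic: the sentinel is only considered bare if the lookup returns null.
    static boolean oldIsBareSentinel(long coveringStartTs) {
        return txnTable.get(coveringStartTs) == null;
    }

    // Fixed logic: a covering write that aborted still leaves the sentinel bare.
    static boolean newIsBareSentinel(long coveringStartTs) {
        Long commitTs = txnTable.get(coveringStartTs);
        return commitTs == null || commitTs == FAILED_COMMIT_TS;
    }

    public static void main(String[] args) {
        long coveringStartTs = 100L;
        // The only write covering the sentinel aborted, so an entry -> -1 exists.
        txnTable.put(coveringStartTs, FAILED_COMMIT_TS);
        System.out.println("old: " + oldIsBareSentinel(coveringStartTs)); // false -> would panic
        System.out.println("new: " + newIsBareSentinel(coveringStartTs)); // true -> treated as empty
    }
}
```

With an aborted covering write, the old check returns false (not bare) and the read path panics; the fixed check treats the cell as empty, matching the intended semantics.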
After this PR:
==COMMIT_MSG==
SnapshotTransaction no longer throws when reading a bare sentinel covered by an aborted value, which can happen post-table truncation or restores if subsequent transactions fail.
==COMMIT_MSG==
Also, SnapshotTransaction has fewer responsibilities, though the fix above is probably the more changelog-relevant change.
Priority: High P2
Concerns / possible downsides (what feedback would you like?): Nothing in particular.
Is documentation needed?: No.
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?: Don't think so
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?: No
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.): Yes
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?: Not that I'm aware of
Does this PR need a schema migration? No
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?: Nothing specific
What was existing testing like? What have you done to improve it?: Extended existing tests to use shiny JUnit5 parameterized tests for "how a value can be uncommitted", and wrote new unit tests for the small piece extracted.
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.: Not much concurrency...
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?: N/A
Execution
How would I tell this PR works in production? (Metrics, logs, etc.): Nothing breaks! The specific situation is so rare that I don't know if we can meaningfully test it.
Has the safety of all log arguments been decided correctly?: Enum types only.
Will this change significantly affect our spending on metrics or logs?: No.
How would I tell that this PR does not work in production? (monitors, etc.): Something breaks
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?: Rollback
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.: No
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?: No
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?: Not that I'm aware of
Development Process
Where should we start reviewing?: Either the AbstractSnapshotTransaction test changes or the ReadSentinelTest
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?: This could be done, but the PR isn't dramatically high priority, and a good chunk is just moved from SnapshotTransaction to the separate class.
Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju