Before this PR: The mid-workflow validation of our transaction history has a race condition if certain dice are rolled very precisely (sketched below):
1. Task 0 starts validating the index state.
2. It picks some index number, reads the summary row, and finds that the index does not exist there.
3. While this is happening, another task writes the main value to that index.
4. Task 0 then reads the main value and it's there, so the validation flags a violation even though nothing was actually wrong.
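For illustration, here is the shape of that check-then-act race as a minimal Java sketch; the helper names (`summaryRowContains`, `readMainValue`, `reportSummaryInconsistencyViolation`) are hypothetical, not the actual TransientRowsWorkflow code. The key point is that the two reads are independent, so a writer can commit between them:

```java
// Hypothetical sketch of the pre-PR race; not the real workflow code.
// Read 1: the summary row says `index` has never been written.
if (!summaryRowContains(index)) {
    // Window: another task can commit the main value for `index` right here.
    // Read 2: the main value is now present, even though the summary said otherwise.
    if (readMainValue(index).isPresent()) {
        // Each read saw a different state of the world, so this "violation" is spurious.
        reportSummaryInconsistencyViolation(index);
    }
}
```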
After this PR:
==COMMIT_MSG==
Mid-workflow validation of the transaction history in the TransientRowsWorkflow is now transactional.
==COMMIT_MSG==
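As a rough sketch of what "transactional" buys us, assuming an AtlasDB-style `TransactionManager` (method and helper names here are illustrative and may not match the workflow code): both reads now execute against a single snapshot, so a concurrent writer can no longer make them disagree.

```java
// Hypothetical sketch of the post-PR validation; helper names are illustrative.
transactionManager.runTaskReadOnly(txn -> {
    boolean inSummary = summaryRowContains(txn, index);
    boolean hasMainValue = readMainValue(txn, index).isPresent();
    // Both reads observe the same snapshot, so "absent from the summary row but
    // present in the main table" can only mean a genuine inconsistency.
    if (!inSummary && hasMainValue) {
        reportSummaryInconsistencyViolation(index);
    }
    return null;
});
```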
Priority: P2
Concerns / possible downsides (what feedback would you like?): None in particular
Is documentation needed?: No
Compatibility
Antithesis workload server change
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?: Nothing in particular
What was existing testing like? What have you done to improve it?: It is very hard to add a deterministic test for this specific race condition
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.: N/A
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?: N/A
Execution
How would I tell this PR works in production? (Metrics, logs, etc.): We no longer see this class of violation
Has the safety of all log arguments been decided correctly?: N/A
Will this change significantly affect our spending on metrics or logs?: No
How would I tell that this PR does not work in production? (monitors, etc.): We continue to see this class of violation and can show that it stems from this validation path
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?: Rollback; it would be straightforward
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Antithesis workload server change
Development Process
Where should we start reviewing?: The change is small; start anywhere
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?: N/A (the PR is under 500 lines)
Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju
What do the change types mean?
- `feature`: A new feature of the service.
- `improvement`: An incremental improvement in the functionality or operation of the service.
- `fix`: Remedies the incorrect behaviour of a component of the service in a backwards-compatible way.
- `break`: Has the potential to break consumers of this service's API, inclusive of both Palantir services
and external consumers of the service's API (e.g. customer-written software or integrations).
- `deprecation`: Advertises the intention to remove service functionality without any change to the
operation of the service itself.
- `manualTask`: Requires the possibility of manual intervention (running a script, eyeballing configuration,
performing database surgery, ...) at the time of upgrade for it to succeed.
- `migration`: A fully automatic upgrade migration task with no engineer input required.
_Note: only one type should be chosen._
How are new versions calculated?
- ❗The `break` and `manualTask` changelog types will result in a major release!
- 🐛 The `fix` changelog type will result in a minor release in most cases, and a patch release version for patch branches. This behaviour is configurable in autorelease.
- ✨ All others will result in a minor version release.
**Check the box to generate changelog(s)**
- [x] Generate changelog entry