Before this PR:
Our previous assumption that a bucket could just be a fine partition was invalidated by the sheer load on one of our internal products, where processing one fine partition at a time would mean we'd never keep up.
Instead, we've decided to bucket the timestamps into 10-minute windows. Each shard will have the same timestamp range for a given bucket identifier (this is not necessary for correctness, but it simplifies the implementation and is currently sufficient).
This PR implements the state machine ADT that will be used to track our progress through the bucket-assignment process.
In particular, we have the following states:
START - there are no open buckets for this bucket identifier.
OPEN - we have 0 or more open buckets for this bucket identifier; to transition from this state, we must open all the buckets.
WAITING_FOR_CLOSE - the state we transition into after OPEN. Here, we wait until we can close the bucket - once we do so, we transition into:
CLOSE_FROM_OPEN - we have 0 or more closed buckets. To transition from this state, we must close all the buckets.
IMMEDIATE_CLOSE - a short circuit from START where we can immediately create a complete bucket. This saves us a bunch of CAS requests: we don't need to go from null -> (x, -1) -> (x, y) [where x and y are the start-inclusive and end-exclusive timestamps of the range] when we can go straight from null -> (x, y). A sketch of the ADT follows the list.
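Roughly, the ADT could look like the sketch below. This is a minimal illustration assuming Java 17 sealed interfaces and records; the field names (startInclusive, endExclusive) and exact state shapes are illustrative, not necessarily those in the PR.

```java
// Sketch of the bucket-assigner state ADT: one variant per state described above.
public sealed interface BucketAssignerState
        permits BucketAssignerState.Start,
                BucketAssignerState.Open,
                BucketAssignerState.WaitingForClose,
                BucketAssignerState.CloseFromOpen,
                BucketAssignerState.ImmediateClose {

    // START: no open buckets exist for this bucket identifier yet.
    record Start() implements BucketAssignerState {}

    // OPEN: 0 or more buckets are open at startInclusive; the rest must still
    // be opened (null -> (x, -1)) before we can transition on.
    record Open(long startInclusive) implements BucketAssignerState {}

    // WAITING_FOR_CLOSE: all buckets are open; wait until the bucket can be closed.
    record WaitingForClose(long startInclusive) implements BucketAssignerState {}

    // CLOSE_FROM_OPEN: 0 or more buckets are closed ((x, -1) -> (x, y)); the rest
    // must still be closed before transitioning out of this state.
    record CloseFromOpen(long startInclusive, long endExclusive) implements BucketAssignerState {}

    // IMMEDIATE_CLOSE: short circuit from START where both endpoints are already
    // known, so we write (x, y) directly (null -> (x, y)) and skip the extra CAS.
    record ImmediateClose(long startInclusive, long endExclusive) implements BucketAssignerState {}
}
```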
For more information, read the internal RFCs.
After this PR:
==COMMIT_MSG==
==COMMIT_MSG==
Priority: P2
Concerns / possible downsides (what feedback would you like?):
Not with this one, tbh.
Is documentation needed?:
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
No
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
No
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Yes
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
No
Does this PR need a schema migration?
No
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
None
What was existing testing like? What have you done to improve it?:
Added tests for serde. It may appear trivial, but it did actually catch a mistake where I missed one set of serde tags!
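For reference, the serde tests are round-trip checks of roughly the following shape. This is a sketch assuming Jackson and JUnit 5 with AssertJ; the test class, method, and state constructor names are hypothetical, and it assumes the interface carries the usual polymorphic type tags (e.g. @JsonTypeInfo/@JsonSubTypes).

```java
import static org.assertj.core.api.Assertions.assertThat;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

final class BucketAssignerStateSerdeTest {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Test
    void roundTripsWaitingForClose() throws Exception {
        // Hypothetical state value; the real constructor shape may differ.
        BucketAssignerState original = new BucketAssignerState.WaitingForClose(1_000L);
        String json = MAPPER.writeValueAsString(original);

        // Deserialising into the wrong concrete state (e.g. OPEN instead of
        // WAITING_FOR_CLOSE) is the failure mode a missing serde tag causes,
        // and exactly what these round-trip checks guard against.
        BucketAssignerState roundTripped = MAPPER.readValue(json, BucketAssignerState.class);
        assertThat(roundTripped).isEqualTo(original);
    }
}
```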
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
N/A
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
N/A
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
Serde works without failure
Has the safety of all log arguments been decided correctly?:
N/A
Will this change significantly affect our spending on metrics or logs?:
N/A
How would I tell that this PR does not work in production? (monitors, etc.):
Serde fails explicitly (great), or we start deserialising into the wrong state (BAD, hence the tests)
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
N/A
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
N/A
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
No
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
Not here.
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
Not this one
Development Process
Where should we start reviewing?:
BucketAssignerState
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
N/A
Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju
What do the change types mean?
- `feature`: A new feature of the service.
- `improvement`: An incremental improvement in the functionality or operation of the service.
- `fix`: Remedies the incorrect behaviour of a component of the service in a backwards-compatible way.
- `break`: Has the potential to break consumers of this service's API, inclusive of both Palantir services
and external consumers of the service's API (e.g. customer-written software or integrations).
- `deprecation`: Advertises the intention to remove service functionality without any change to the
operation of the service itself.
- `manualTask`: Requires the possibility of manual intervention (running a script, eyeballing configuration,
performing database surgery, ...) at the time of upgrade for it to succeed.
- `migration`: A fully automatic upgrade migration task with no engineer input required.
_Note: only one type should be chosen._
How are new versions calculated?
- ❗The `break` and `manualTask` changelog types will result in a major release!
- 🐛 The `fix` changelog type will result in a minor release in most cases, and a patch release version for patch branches. This behaviour is configurable in autorelease.
- ✨ All others will result in a minor version release.