palantir / atlasdb

Transactional Distributed Database Layer
https://palantir.github.io/atlasdb/
Apache License 2.0

[PDS-543497] Permit 2 DC setups where one of the DCs has less than expected replication #7133

Closed jeremyk-91 closed 4 weeks ago

jeremyk-91 commented 4 weeks ago

General

Before this PR: Atlas fails its Cassandra topology checks if replication in any of the advertised datacenters does not match. This includes a legitimate situation that we encounter in our internal implementation of DC migrations, where we increase/decrease the RFs of the new/old cluster incrementally, rather than all at once.
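For context, the mixed-RF state arises because each datacenter's replication factor is stepped independently during a migration. With Cassandra's `NetworkTopologyStrategy`, per-DC RFs are set separately, so the two DCs can legitimately disagree mid-migration. A sketch (keyspace and DC names are hypothetical):

```sql
-- Hypothetical keyspace/DC names. During a DC migration the new DC is added
-- at a low RF and stepped up incrementally, so the DCs temporarily disagree:
ALTER KEYSPACE atlas WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc_old': 3,
    'dc_new': 1
};
```

Before this PR, AtlasDB's topology check would reject this intermediate state even though it is expected during the migration procedure.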

After this PR: ==COMMIT_MSG== Atlas now permits two-DC setups in which one datacenter matches the expected replication factor and the other has an RF smaller than expected. ==COMMIT_MSG==
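The relaxed acceptance rule can be sketched as follows. This is a minimal illustration, not the actual `CassandraVerifier` code; the class and method names are hypothetical:

```java
import java.util.Map;

// Hypothetical sketch of the relaxed topology check described in this PR:
// accept when every advertised DC matches the expected RF, or when there are
// exactly two DCs, one matching the expected RF and one strictly below it
// (the intermediate state seen during a DC migration).
public final class TopologyCheckSketch {
    private TopologyCheckSketch() {}

    public static boolean isAcceptableReplication(Map<String, Integer> dcToRf, int expectedRf) {
        boolean allMatch = dcToRf.values().stream().allMatch(rf -> rf == expectedRf);
        if (allMatch) {
            return true;
        }
        if (dcToRf.size() != 2) {
            return false;
        }
        long matching = dcToRf.values().stream().filter(rf -> rf == expectedRf).count();
        long below = dcToRf.values().stream().filter(rf -> rf < expectedRf).count();
        return matching == 1 && below == 1;
    }

    public static void main(String[] args) {
        // The 3-and-1 migration state is now accepted...
        System.out.println(isAcceptableReplication(Map.of("dc1", 3, "dc2", 1), 3)); // true
        // ...but a setup where neither DC matches the expected RF is still rejected.
        System.out.println(isAcceptableReplication(Map.of("dc1", 2, "dc2", 1), 3)); // false
    }
}
```

Note that a two-DC setup where one DC's RF *exceeds* the expected RF would still fail under this rule; only the one-matching-plus-one-below case is admitted.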

Priority: High P2, blocks migrations

Concerns / possible downsides (what feedback would you like?):

Is documentation needed?: I don't think so.

Compatibility

Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?: No

Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?: No

The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.): Yes; old versions will simply continue to flag this intermediate state as hazardous.

Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?: Cassandra DC migrations stick to the current procedure.

Does this PR need a schema migration? No

Testing and Correctness

What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?: That we generally only run our Cassandra clusters with one DC.

What was existing testing like? What have you done to improve it?: Added new tests for the new (unfortunately more complex) logic.

If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.: It doesn't

If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?: It doesn't

Execution

How would I tell this PR works in production? (Metrics, logs, etc.): We might see a new log line in this state.

Has the safety of all log arguments been decided correctly?: Yes: the keyspace name comes from configuration, so it is safe to log.

Will this change significantly affect our spending on metrics or logs?: I don't think so.

How would I tell that this PR does not work in production? (monitors, etc.): We would still see topology check failures in the mixed 3-and-1 RF state.

If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?: Rollback

If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):

Scale

Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.: No.

Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?: No

Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?: Yes, if we change the migration process. I imagine tests of the new migration process would flush this out.

Development Process

Where should we start reviewing?: CassandraVerifier. Most of this is tests, honestly.

If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:

Please tag any other people who should be aware of this PR: @jeremyk-91 @sverma30 @raiju

changelog-app[bot] commented 4 weeks ago

Generate changelog in `changelog/@unreleased`

What do the change types mean?

- `feature`: A new feature of the service.
- `improvement`: An incremental improvement in the functionality or operation of the service.
- `fix`: Remedies the incorrect behaviour of a component of the service in a backwards-compatible way.
- `break`: Has the potential to break consumers of this service's API, inclusive of both Palantir services and external consumers of the service's API (e.g. customer-written software or integrations).
- `deprecation`: Advertises the intention to remove service functionality without any change to the operation of the service itself.
- `manualTask`: Requires the possibility of manual intervention (running a script, eyeballing configuration, performing database surgery, ...) at the time of upgrade for it to succeed.
- `migration`: A fully automatic upgrade migration task with no engineer input required.

_Note: only one type should be chosen._
How are new versions calculated?

- ❗ The `break` and `manual task` changelog types will result in a major release!
- 🐛 The `fix` changelog type will result in a minor release in most cases, and a patch release version for patch branches. This behaviour is configurable in autorelease.
- ✨ All others will result in a minor version release.

Type

- [ ] Feature
- [ ] Improvement
- [x] Fix
- [ ] Break
- [ ] Deprecation
- [ ] Manual task
- [ ] Migration

Description

AtlasDB now permits two-DC setups in which one datacenter matches the expected RF and the other has an RF smaller than expected. This is a normal intermediate state of the cluster during DC migrations.

**Check the box to generate changelog(s)**

- [x] Generate changelog entry
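The resulting changelog entry would look something like the fragment below. The file path and exact schema are assumptions based on Palantir's autorelease conventions, not taken from this PR:

```yaml
# changelog/@unreleased/pr-7133.v2.yml (assumed path and schema)
type: fix
fix:
  description: |
    AtlasDB now permits two-DC setups in which one datacenter matches the
    expected RF and the other has an RF smaller than expected. This is a
    normal intermediate state of the cluster during DC migrations.
  links:
    - https://github.com/palantir/atlasdb/pull/7133
```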
svc-autorelease commented 4 weeks ago

Released 0.1095.0