palantir / atlasdb

Transactional Distributed Database Layer
https://palantir.github.io/atlasdb/
Apache License 2.0

feature: Allow configurable table mapper caching #6946

Closed jeremyk-91 closed 6 months ago

jeremyk-91 commented 6 months ago

General

Before this PR: TableMappingService is used to deal with name length limits in various databases: instead of creating a database identifier that is the fully qualified table name (FQTN), which may be too long, we convert the FQTN to a number K and store _n_K as the short name of that table.

When formulating queries we need to map between FQTNs and short names. We generally cache this mapping, and for the majority of AtlasDB clients this is fine, because a table, once assigned, stays assigned. If tables are dropped, the mapping might be stale in that it could still point to a table that used to exist, but we will still correctly get an exception from the database.
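As a rough illustration of the scheme above, here is a minimal sketch; the class and member names are hypothetical, not AtlasDB's actual TableMappingService API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: assigns each FQTN a short name of the form _n_K and
// caches the assignment forever, mirroring the behaviour described above.
final class ShortNameCacheSketch {
    private final Map<String, String> fqtnToShortName = new ConcurrentHashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    // Safe while tables, once assigned, stay assigned: a cached entry can only
    // become stale by pointing at a dropped table, in which case the database
    // itself will throw on use.
    String shortNameFor(String fullyQualifiedTableName) {
        return fqtnToShortName.computeIfAbsent(
                fullyQualifiedTableName, unused -> "_n_" + sequence.incrementAndGet());
    }
}
```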

However, clients that delete and re-create tables may run into issues: if a table is deleted and re-created via a different service node, the table mapper on an existing service node may be completely or partially unaware of this, leaving that existing node unable to use the new table even though it was properly created.

An important consideration to be aware of going into this PR is that performance on the non-dynamic tables code path is paramount: this covers the majority of users, and their performance must not be (significantly) affected by this change.

After this PR:

==COMMIT_MSG== Users are able to provide a predicate indicating the fully qualified table names for which the table mapping cache is allowed to operate. This should unblock workflows for users that delete and re-create tables, who can elect not to cache said tables in the table mapper. ==COMMIT_MSG==
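To make the shape of the change concrete, here is a hedged sketch of how a predicate-gated mapper could behave. TableMappingCacheConfiguration is the real review entry point named below, but the field and method names here are assumptions, not the actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Hypothetical sketch of a predicate-gated table mapper; cacheableTables is
// an assumed name, not necessarily the shape of TableMappingCacheConfiguration.
final class PredicateGatedTableMapperSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Predicate<String> cacheableTables;

    PredicateGatedTableMapperSketch(Predicate<String> cacheableTables) {
        this.cacheableTables = cacheableTables;
    }

    String shortNameFor(String fqtn) {
        if (cacheableTables.test(fqtn)) {
            // Cacheable tables keep today's behaviour: look up once, remember forever.
            return cache.computeIfAbsent(fqtn, this::readShortNameFromMappingTable);
        }
        // Non-cacheable tables always consult the persisted mapping table, so a
        // drop-and-recreate performed on another node is observed immediately.
        return readShortNameFromMappingTable(fqtn);
    }

    private String readShortNameFromMappingTable(String fqtn) {
        // In the real service this is a lookup against the persisted mapping
        // table; elided in this sketch.
        throw new UnsupportedOperationException("sketch only");
    }
}
```

A client that deletes and re-creates tables under, say, a scratch namespace could then supply a predicate like `fqtn -> !fqtn.startsWith("scratch.")` so that only those dynamic tables bypass the cache, leaving the hot path for ordinary tables untouched.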

Priority: P1.

Concerns / possible downsides (what feedback would you like?):

Is documentation needed?: Probably not.

Compatibility

Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?: No

Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?: No

The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.): Yes - during the overlap period, blue nodes will still be vulnerable to the delete-and-re-create race condition, but that's fine.

Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?: No.

Does this PR need a schema migration? No

Testing and Correctness

What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?: That this change will suffice; the team operating the product using this feature will be able to confirm or deny that.

What was existing testing like? What have you done to improve it?: I added a number of tests for KvTableMappingService.

If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.: N/A.

If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?: N/A.

Execution

How would I tell this PR works in production? (Metrics, logs, etc.): Dynamic table workflows proceed smoothly.

Has the safety of all log arguments been decided correctly?: No new Args were added. Exception messages could have unsafe content, but they are not safe-loggable anyway and will be treated as unsafe.

Will this change significantly affect our spending on metrics or logs?: No.

How would I tell that this PR does not work in production? (monitors, etc.): Dynamic table workflows still run into table-mapping race conditions.

If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?: Rollback.

If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):

Scale

Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.: No

Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?: There is a row range scan over the table mapping table if users frequently invoke the reverse mapping generateMapToFullTableNames, but I don't think we can avoid this.
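To make that cost concrete, here is a minimal sketch (assumed names, not the real key-value-service API) of why the reverse mapping implies reading the whole mapping table when the cache cannot be consulted:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: resolving short names back to FQTNs without a complete
// cache means iterating every row of the mapping table, which at the KVS
// level is a row range scan.
final class ReverseMappingSketch {
    static Map<String, String> mapToFullTableNames(
            Set<String> shortNames, Iterable<Map.Entry<String, String>> mappingTableScan) {
        Map<String, String> shortToFull = new HashMap<>();
        for (Map.Entry<String, String> row : mappingTableScan) { // row: FQTN -> short name
            if (shortNames.contains(row.getValue())) {
                shortToFull.put(row.getValue(), row.getKey());
            }
        }
        return shortToFull;
    }
}
```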

Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?: If the team operating the product using this feature finds that this results in too large a performance regression, we can re-evaluate.

Development Process

Where should we start reviewing?: TableMappingCacheConfiguration

If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?: N/A

Please tag any other people who should be aware of this PR: @jeremyk-91 @sverma30 @raiju

svc-autorelease commented 6 months ago

Released 0.1028.0