Closed: ajwerner closed this issue 3 years ago.
The success criterion here is that we reduce the round trips in the rttanalysis benchmarks. These tests track the number of round trips needed to run certain queries: https://github.com/cockroachdb/cockroach/blob/master/pkg/bench/rttanalysis/testdata/benchmark_expectations.
You can rewrite the expectations using `make test PKG=./pkg/bench/rttanalysis TESTS=TestBenchmarkExpectation FLAGS='--rewrite'`.
The idea is that we should be able to make the Grant tests go from 3 round trips per additional table to 2. Getting to zero will require batching the retrieval and writing of descriptors (https://github.com/cockroachdb/cockroach/issues/64388).
18,Grant/grant_all_on_1_table
21,Grant/grant_all_on_2_tables
24,Grant/grant_all_on_3_tables
The sketch of the work is:
- Add `(*jobs.Registry).CreateJobsWithTxn` to create job records (and their associated IDs) in a single batch.
- Track each pending job as a `Record` and `JobID` pair.
- Update `(*sql.planner).createOrUpdateSchemaChangeJob` to deal with `Record`/`JobID` pairs instead of `Job` structs.
- Write the buffered records in `(*sql.connExecutor).CommitSQLTransaction`.

The new numbers are:
18,Grant/grant_all_on_1_table
20,Grant/grant_all_on_2_tables
22,Grant/grant_all_on_3_tables
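The round-trip savings that these numbers reflect can be illustrated with a small self-contained Go sketch (toy types only, not CockroachDB's actual jobs API): a store that counts round trips, written to once per job on the eager path and once in total on the batched path.

```go
package main

import "fmt"

// Record is a hypothetical job record; JobID is its assigned ID.
type Record struct{ Description string }
type JobID int64

// kv models a store that counts round trips.
type kv struct {
	roundTrips int
	rows       []Record
}

// writeBatch persists any number of records in one round trip.
func (s *kv) writeBatch(recs []Record) []JobID {
	s.roundTrips++
	ids := make([]JobID, len(recs))
	for i, r := range recs {
		s.rows = append(s.rows, r)
		ids[i] = JobID(len(s.rows))
	}
	return ids
}

func main() {
	recs := []Record{{"add column"}, {"drop index"}, {"backfill"}}

	// Eager path: one round trip per job.
	eager := &kv{}
	for _, r := range recs {
		eager.writeBatch([]Record{r})
	}

	// Batched path: all buffered records flushed in one round trip.
	batched := &kv{}
	batched.writeBatch(recs)

	fmt.Println("eager round trips:", eager.roundTrips)     // eager round trips: 3
	fmt.Println("batched round trips:", batched.roundTrips) // batched round trips: 1
}
```

The point is only that the number of writes becomes independent of the number of jobs, which is what drops the per-table round-trip count in the benchmarks above.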
Thanks for clearly sketching out the work.
Is your feature request related to a problem? Please describe.
Create an API to create a batch of jobs. All existing APIs only allow creating a single job at a time.
Describe the solution you'd like
Add a new method, `(*jobs.Registry).CreateJobsWithTxn`, that creates a batch of job records in a single transaction.
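A rough sketch of what such a batch API could look like follows. This is a toy, self-contained illustration: the `Txn` type, the ID allocation, and the exact signature are assumptions for the example, not CockroachDB's real interfaces.

```go
package main

import "fmt"

type JobID int64
type Record struct{ Description string }

// Txn models a transaction that accumulates writes and sends them
// to the store as a single batch at commit time.
type Txn struct{ pending []Record }

func (t *Txn) put(r Record) { t.pending = append(t.pending, r) }

// Registry hands out job IDs and stages job records.
type Registry struct{ nextID JobID }

// CreateJobsWithTxn stages a batch of job records on txn and returns
// their assigned IDs; nothing is persisted until the txn commits.
func (r *Registry) CreateJobsWithTxn(txn *Txn, recs []Record) []JobID {
	ids := make([]JobID, len(recs))
	for i, rec := range recs {
		r.nextID++
		ids[i] = r.nextID
		txn.put(rec)
	}
	return ids
}

func main() {
	reg, txn := &Registry{}, &Txn{}
	ids := reg.CreateJobsWithTxn(txn, []Record{{"job a"}, {"job b"}})
	fmt.Println(ids, len(txn.pending)) // [1 2] 2
}
```

The key design point is that callers get all the IDs up front while the writes themselves ride on the transaction's batch, so adding more jobs adds no extra round trips.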
Additional context
This is one of the remaining work items to fix #41930.
The relevant context is that today we create or update jobs eagerly during schema change operations in https://github.com/cockroachdb/cockroach/blob/67099fadb9a1d069abaf84004076bc32aa9f00fd/pkg/sql/table.go#L104 and similar.
Rather than creating these jobs eagerly, we should just store the intended job record in the `extraTxnState` and write it in one go before committing. This will be a major performance optimization for the existing code path and will be easy to adopt in the new schema changer.

Epic: CRDB-8577