Offline discussion with @spaskob: we can sidestep point 2 of the proposals by instead having the Registry.Run()
call force the registry to adopt specific jobs rather than arbitrary ones. For point 3, schema change jobs that are aware of subsequent jobs in the mutations queue can launch a goroutine to call Registry.Run()
explicitly, or something along those lines.
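A rough sketch of the shape this could take; all of the types and method bodies here are hypothetical stand-ins, not the actual jobs package API:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Registry is a stand-in for the jobs registry. All names here are
// hypothetical; this only illustrates the shape of the proposal.
type Registry struct{}

// runJob claims and executes a single job by ID.
func (r *Registry) runJob(ctx context.Context, jobID int64) error {
	fmt.Printf("adopted and ran job %d\n", jobID)
	return nil
}

// Run adopts the specific jobs it is given instead of letting the
// periodic adoption loop pick up arbitrary jobs.
func (r *Registry) Run(ctx context.Context, jobIDs []int64) error {
	for _, id := range jobIDs {
		if err := r.runJob(ctx, id); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	r := &Registry{}
	// A schema change job that knows about subsequent jobs in the
	// mutation queue launches a goroutine to run them explicitly,
	// rather than waiting for the adoption loop to find them.
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		_ = r.Run(context.Background(), []int64{42, 43})
	}()
	wg.Wait()
}
```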
Simply not writing jobs in the cases where we use sqlbase.InvalidMutationID also seems good.
This is a simple way to reproduce the slowness:
```shell
cockroach sql --insecure --watch 1s -e '
drop table if exists users cascade;
create table users (
  id uuid not null,
  name varchar(255) not null,
  email varchar(255) not null,
  password varchar(255) not null,
  remember_token varchar(100) null,
  created_at timestamp(0) without time zone null,
  updated_at timestamp(0) without time zone null,
  deleted_at timestamp(0) without time zone null
);
alter table users add primary key (id);
alter table users add constraint users_email_unique unique (email);'
```
It turns out there's a short-term fix that seems to work pretty well. The reason for the slowness is that if the schema change is not first in line in the table's mutation queue, it returns a retriable error and the jobs framework re-adopts and runs it later. The problem is that the job adoption loop only runs every 30s. This is now fixed via #48608.
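To make the latency mechanism concrete, here is a toy model of the adoption loop; the 30s interval is from the comment above, everything else is illustrative:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Toy model: the registry only looks for adoptable jobs every 30s.
	const adoptInterval = 30 * time.Second
	ticker := time.NewTicker(adoptInterval)
	defer ticker.Stop()
	for range ticker.C {
		// A schema change that is not first in its table's mutation
		// queue fails with a retriable error and lands back in the
		// queue, paying up to another full interval of latency before
		// this loop picks it up again.
		fmt.Println("scanning jobs table and adopting unclaimed jobs")
	}
}
```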
I will still work on the jobs improvements to prevent future regressions and to simplify the jobs framework. This PR, https://github.com/cockroachdb/cockroach/pull/48600, is an example of what is coming next.
Points 2 and 3 were subsumed by #48621.
Points 4 and 5 are lower priority.
Hi, schema changes have improved in speed, but there is still an issue:
CPU usage is still extremely high (like 90%-100% on a 4-core machine) during schema changes.
> Hi, schema changes have improved in speed, but there is still an issue: CPU usage is still extremely high (like 90%-100% on a 4-core machine) during schema changes.
How much data is in the tables, and how many schema changes are you running? If you'd be willing to provide any sort of script to reproduce the problem, or CPU profiles, that would be very helpful.
We are also experiencing extremely high CPU usage (>90%), as well as high bandwidth usage, in our Cockroach Cloud 3-node production cluster (ID 36691cbd-9927-438f-83af-cdc3c06a2b20). Is this regression truly resolved?
Describe the problem
We've had a number of reports of schema changes being slow lately (#38111, #47607, #47512). This is a meta-issue to state a theory about the problem and to track reproduction and evaluation. It also includes proposed steps.
Theory
We've exposed some schema changes to the job adoption cycle even when they would otherwise complete very quickly.
Alternative Consideration
We scan the entire jobs table in the job adoption loop; as this table gets big, that scan probably gets slow.
Proposals
[x] 1. Understand the exact implications with regards to user-visible latency for schema changes
We should write some tests that demonstrate the slowness, and then fix it.
[ ] 2. Prioritize the jobs we adopt in the adoption loop
We really want to adopt just about anything ahead of GC jobs.
We really don't want to adopt jobs that have unmet mutation dependencies.
We could create a simple ranking mechanism where we prioritize by: 1) Type: maybe run GC jobs last; 2) Key: choose uniformly at random among jobs with the same key; 3) Value: choose in value order among jobs with the same key. See the sketch below.
I'm not entirely sold on the type prioritization, but at least it's easy. For the others, we could inject a function per type that controls its ranking; it doesn't need to be very tightly coupled.
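A sketch of what such a ranking could look like; the job fields and type names here are hypothetical stand-ins, and the per-type ranking function is hardcoded rather than injected:

```go
package main

import (
	"fmt"
	"sort"
)

// job is a hypothetical flattened view of a row in the jobs table.
type job struct {
	id      int64
	jobType string // e.g. "SCHEMA CHANGE", "SCHEMA CHANGE GC"
	key     string // jobs with mutation dependencies share a key
	value   int64  // ordering among jobs with the same key
}

// typeRank pushes GC jobs to the back of the adoption queue.
func typeRank(t string) int {
	if t == "SCHEMA CHANGE GC" {
		return 1
	}
	return 0
}

// rank orders candidates: type first, then key, then value. Choosing
// uniformly at random among distinct keys is elided here; a stable sort
// over a pre-shuffled slice would get the same effect.
func rank(jobs []job) {
	sort.SliceStable(jobs, func(i, j int) bool {
		a, b := jobs[i], jobs[j]
		if ta, tb := typeRank(a.jobType), typeRank(b.jobType); ta != tb {
			return ta < tb
		}
		if a.key != b.key {
			return a.key < b.key
		}
		return a.value < b.value
	})
}

func main() {
	jobs := []job{
		{id: 1, jobType: "SCHEMA CHANGE GC", key: "t1", value: 0},
		{id: 2, jobType: "SCHEMA CHANGE", key: "t2", value: 1},
		{id: 3, jobType: "SCHEMA CHANGE", key: "t2", value: 0},
	}
	rank(jobs)
	fmt.Println(jobs) // adopt job 3, then job 2; GC job 1 last
}
```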
[ ] 3. Start jobs immediately after finishing a job.
Right now we wait for the adoption loop before picking up another job. If you have a number of queued mutations, it may take a long time for the subsequent jobs to get picked up.
This should be really easy; see the sketch below.
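One way to get this without changing the 30s interval: let a finishing job nudge the adoption loop awake. A minimal sketch, assuming a channel-based loop rather than whatever the registry actually uses:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// nudge lets a completing job wake the adoption loop immediately
	// instead of leaving queued work to wait for the next tick.
	nudge := make(chan struct{}, 1)
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	// A job that finishes and knows more mutations are queued sends a
	// nudge rather than relying on the periodic adoption loop.
	go func() {
		fmt.Println("schema change job finished; more mutations queued")
		select {
		case nudge <- struct{}{}:
		default: // a wakeup is already pending
		}
	}()

	// The adoption loop wakes on whichever comes first.
	select {
	case <-ticker.C:
	case <-nudge:
	}
	fmt.Println("adoption loop: adopting the next queued job immediately")
}
```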
[ ] 4. Stop creating jobs that just wait.
Right now we create jobs that make no sense as jobs.
There is no value in another node adopting the job, other than to coordinate the waiting.
These excess jobs probably exacerbate the above problems.
[ ] 5. Create an index on the status column of jobs.
Junk piles up in the jobs table, and without an index on status the adoption query has to scan all of it, so it gets very slow.
Adding an index should be an easy migration; a sketch follows.
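The migration itself is just DDL. A hedged sketch of issuing it from Go via database/sql; the index name is made up, and in practice a change to system.jobs would ship as an internal cluster migration rather than ad-hoc DDL from a client:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol.
)

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hypothetical index name; the point is simply that adoption-loop
	// queries filtering on status stop scanning the whole table.
	const stmt = `CREATE INDEX IF NOT EXISTS jobs_status_idx
	              ON system.jobs (status)`
	if _, err := db.Exec(stmt); err != nil {
		log.Fatal(err)
	}
}
```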