
CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
https://www.cockroachlabs.com

sql/tests: TestErrorDuringExtendedProtocolCommit failed #103418

Closed: cockroach-teamcity closed this issue 1 year ago

cockroach-teamcity commented 1 year ago

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 358e0d87912365b8976c55ab9b3292e999cf720d:

I230516 14:33:14.887984 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 734  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230516 14:33:14.888187 9469 jobs/adopt.go:261  [T1,n1] 735  job 865655832107745281: resuming execution
I230516 14:33:14.902206 9478 jobs/registry.go:1584  [T1,n1] 736  MIGRATION job 865655832107745281: stepping through state running
I230516 14:33:14.914522 9447 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 737 ={"Timestamp":1684247594910598000,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.918282 9456 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 738 ={"Timestamp":1684247594914818665,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.921649 9512 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 739 ={"Timestamp":1684247594918545172,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.924592 9499 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 740 ={"Timestamp":1684247594921954433,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.929309 9478 jobs/registry.go:1584  [T1,n1] 741  MIGRATION job 865655832107745281: stepping through state succeeded
I230516 14:33:14.948558 7658 jobs/wait.go:145  [T1,n1] 742  waited for 1 [865655832107745281] queued jobs to complete 60.388972ms
I230516 14:33:14.948757 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 743  running permanent upgrade for version 22.2-102
I230516 14:33:14.972048 7658 jobs/registry.go:616  [T1,n1] 744  active version is 23.1
I230516 14:33:14.983465 9594 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:44787,class=default,rpc] 745  connection is now ready
I230516 14:33:14.988970 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 746  running Upgrade to 22.2-102: "create sql activity updater job"
I230516 14:33:14.989032 9583 jobs/adopt.go:261  [T1,n1] 747  job 865655832408162305: resuming execution
I230516 14:33:15.002971 9545 jobs/registry.go:1584  [T1,n1] 748  MIGRATION job 865655832408162305: stepping through state running
I230516 14:33:15.019490 9545 jobs/registry.go:1584  [T1,n1] 749  MIGRATION job 865655832408162305: stepping through state succeeded
I230516 14:33:15.042741 7658 jobs/wait.go:145  [T1,n1] 750  waited for 1 [865655832408162305] queued jobs to complete 53.724368ms
I230516 14:33:15.042793 7658 server/server_sql.go:1615  [T1,n1] 751  done ensuring all necessary startup migrations have run
I230516 14:33:15.043300 9717 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 752  tenant-settings-watcher: established range feed cache
I230516 14:33:15.046089 9695 jobs/job_scheduler.go:407  [T1,n1] 753  waiting 3m0s before scheduled jobs daemon start
I230516 14:33:15.047951 9719 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 754  tenant-capability-watcher: established range feed cache
I230516 14:33:15.047970 9748 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 755  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.048808 9704 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 756  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.049034 9719 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 757  received results of a full table scan for tenant capabilities
I230516 14:33:15.049303 9715 server/auto_upgrade.go:78  [T1,n1] 758  no need to upgrade, cluster already at the newest version
I230516 14:33:15.055884 9752 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 759  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.057346 9697 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 760  warmed privileges for virtual tables in 11.054109ms
I230516 14:33:15.078450 9762 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r4/1:/System{/tsd-tse}] 761  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.093990 7882 3@pebble/event.go:734  [n1,s1,pebble] 762  [JOB 7] WAL created 000006
I230516 14:33:15.096132 7658 1@server/server_sql.go:1733  [T1,n1] 763  serving sql connections
I230516 14:33:15.097831 7658 testutils/testcluster/testcluster.go:1357  [-] 764  WaitForFullReplication
I230516 14:33:15.097869 7658 testutils/testcluster/testcluster.go:1361  [-] 765  WaitForFullReplication took: 370ns
    autocommit_extended_protocol_test.go:197: 
            Error Trace:    github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
            Error:          An error is expected but got nil.
            Test:           TestErrorDuringExtendedProtocolCommit
I230516 14:33:15.110872 7658 testutils/testcluster/testcluster.go:145  [-] 766  TestCluster quiescing nodes
I230516 14:33:15.111025 8209 sql/stats/automatic_stats.go:550  [T1,n1] 767  quiescing auto stats refresher
W230516 14:33:15.111284 9006 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 768  failed to gc transaction record: could not GC completed transaction anchored at /Table/3/1/102/2/1: node unavailable; try another peer
I230516 14:33:15.111294 7849 server/start_listen.go:103  [T1,n1] 769  server shutting down: instructing cmux to stop accepting
I230516 14:33:15.111316 9594 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:44787,class=default,rpc] 770  connection heartbeat loop ended with err: <nil>
W230516 14:33:15.111732 8190 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 771  exiting heartbeat loop
W230516 14:33:15.111760 8190 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 772  exiting heartbeat loop with error: node unavailable; try another peer
E230516 14:33:15.111784 8190 server/server_sql.go:507  [T1,n1] 773  failed to run update of instance with new session ID: node unavailable; try another peer
I230516 14:33:15.111784 8347 jobs/registry.go:1584  [T1,n1] 774  AUTO SPAN CONFIG RECONCILIATION job 865655829350580225: stepping through state succeeded
E230516 14:33:15.111813 9405 jobs/registry.go:989  [T1,n1] 775  error getting live session: node unavailable; try another peer
I230516 14:33:15.116760 7658 server/server_controller_orchestration.go:263  [T1,n1] 776  server controller shutting down ungracefully
I230516 14:33:15.116804 7658 server/server_controller_orchestration.go:274  [T1,n1] 777  waiting for tenant servers to report stopped
W230516 14:33:15.116877 7658 server/server_sql.go:1693  [T1,n1] 778  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.23s)
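The assertion at autocommit_extended_protocol_test.go:197 expects the commit issued through the extended pgwire protocol to surface an error, and fails because the error is nil. A minimal standalone sketch of that failure mode (the `commit` function and its error-injection flag here are hypothetical stand-ins for illustration, not the real test harness):

```go
package main

import (
	"errors"
	"fmt"
)

// commit stands in for the extended-protocol commit step the test drives;
// in the real test, an error is injected during commit on the server side.
// Here the injection is controlled by a flag so the failure mode can be shown.
func commit(injectError bool) error {
	if injectError {
		return errors.New("injected error during commit")
	}
	// The path observed in the log: commit reports success even though
	// the test expected the injected error to be returned.
	return nil
}

func main() {
	if err := commit(false); err == nil {
		// Mirrors the testify-style assertion message from the failure.
		fmt.Println("An error is expected but got nil.")
	}
}
```

The test fails precisely when the error that was supposed to propagate back through the commit never reaches the client, which is what the nil error in the trace indicates.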

Parameters: TAGS=bazel,gss

Help

See also: [How To Investigate a Go Test Failure (internal)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)

Same failure on other branches

- #103122 sql/tests: TestErrorDuringExtendedProtocolCommit failed [C-test-failure O-robot T-sql-foundations T-sql-queries branch-release-22.1]
- #101673 sql/tests: TestErrorDuringExtendedProtocolCommit failed [C-test-failure O-robot T-sql-foundations todo-deprecate.branch-release-23.1.0]

/cc @cockroachdb/sql-queries


Jira issue: CRDB-27988

msirek commented 1 year ago

@cockroachdb/sql-foundations Is this something your team would like to look at?

Full log for test:

=== RUN   TestErrorDuringExtendedProtocolCommit
I230516 14:33:13.904557 7658 3@pebble/event.go:706  [n?,s?,pebble] 551  [JOB 1] MANIFEST created 000001
I230516 14:33:13.904779 7658 3@pebble/event.go:734  [n?,s?,pebble] 552  [JOB 1] WAL created 000002
I230516 14:33:13.904845 7658 3@pebble/event.go:702  [n?,s?,pebble] 553  upgraded to format version: 002
I230516 14:33:13.904872 7658 3@pebble/event.go:702  [n?,s?,pebble] 554  upgraded to format version: 003
I230516 14:33:13.904887 7658 3@pebble/event.go:702  [n?,s?,pebble] 555  upgraded to format version: 004
I230516 14:33:13.904901 7658 3@pebble/event.go:702  [n?,s?,pebble] 556  upgraded to format version: 005
I230516 14:33:13.904920 7658 3@pebble/event.go:702  [n?,s?,pebble] 557  upgraded to format version: 006
I230516 14:33:13.904935 7658 3@pebble/event.go:702  [n?,s?,pebble] 558  upgraded to format version: 007
I230516 14:33:13.904951 7658 3@pebble/event.go:702  [n?,s?,pebble] 559  upgraded to format version: 008
I230516 14:33:13.904966 7658 3@pebble/event.go:702  [n?,s?,pebble] 560  upgraded to format version: 009
I230516 14:33:13.904982 7658 3@pebble/event.go:702  [n?,s?,pebble] 561  upgraded to format version: 010
I230516 14:33:13.905079 7658 3@pebble/event.go:702  [n?,s?,pebble] 562  upgraded to format version: 011
I230516 14:33:13.905102 7658 3@pebble/event.go:702  [n?,s?,pebble] 563  upgraded to format version: 012
I230516 14:33:13.905115 7658 3@pebble/event.go:702  [n?,s?,pebble] 564  upgraded to format version: 013
I230516 14:33:13.905153 7658 server/config.go:857  [T1,n?] 565  1 storage engine initialized
I230516 14:33:13.905173 7658 server/config.go:860  [T1,n?] 566  Pebble cache size: 128 MiB
I230516 14:33:13.905189 7658 server/config.go:860  [T1,n?] 567  store 0: in-memory, size 512 MiB
I230516 14:33:13.905211 7658 server/config.go:860  [T1,n?] 568  store 0: {Encrypted:false ReadOnly:false FileStoreProperties:<nil>}
I230516 14:33:13.905176 7760 3@pebble/event.go:726  [n?,s?,pebble] 569  [JOB 4] all initial table stats loaded
I230516 14:33:13.908621 7658 1@rpc/tls.go:231  [T1,n?] 570  server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I230516 14:33:13.908682 7658 1@rpc/tls.go:280  [T1,n?] 571  web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
W230516 14:33:13.908736 7658 server/status/runtime.go:343  [T1,n?] 572  could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I230516 14:33:13.913013 7658 3@pebble/event.go:706  [n?,s?,pebble] 573  [JOB 1] MANIFEST created 000001
I230516 14:33:13.913211 7658 3@pebble/event.go:734  [n?,s?,pebble] 574  [JOB 1] WAL created 000002
I230516 14:33:13.913281 7658 3@pebble/event.go:702  [n?,s?,pebble] 575  upgraded to format version: 002
I230516 14:33:13.913310 7658 3@pebble/event.go:702  [n?,s?,pebble] 576  upgraded to format version: 003
I230516 14:33:13.913333 7658 3@pebble/event.go:702  [n?,s?,pebble] 577  upgraded to format version: 004
I230516 14:33:13.913353 7658 3@pebble/event.go:702  [n?,s?,pebble] 578  upgraded to format version: 005
I230516 14:33:13.913371 7658 3@pebble/event.go:702  [n?,s?,pebble] 579  upgraded to format version: 006
I230516 14:33:13.913386 7658 3@pebble/event.go:702  [n?,s?,pebble] 580  upgraded to format version: 007
I230516 14:33:13.913401 7658 3@pebble/event.go:702  [n?,s?,pebble] 581  upgraded to format version: 008
I230516 14:33:13.913417 7658 3@pebble/event.go:702  [n?,s?,pebble] 582  upgraded to format version: 009
I230516 14:33:13.913433 7658 3@pebble/event.go:702  [n?,s?,pebble] 583  upgraded to format version: 010
I230516 14:33:13.913548 7658 3@pebble/event.go:702  [n?,s?,pebble] 584  upgraded to format version: 011
I230516 14:33:13.913576 7658 3@pebble/event.go:702  [n?,s?,pebble] 585  upgraded to format version: 012
I230516 14:33:13.913595 7658 3@pebble/event.go:702  [n?,s?,pebble] 586  upgraded to format version: 013
I230516 14:33:13.914555 7793 3@pebble/event.go:726  [n?,stemp,pebble] 587  [JOB 4] all initial table stats loaded
I230516 14:33:13.960021 7658 1@server/clock_monotonicity.go:65  [T1,n?] 588  monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I230516 14:33:13.969202 7658 server/init.go:200  [T1,n?] 589  no stores initialized
I230516 14:33:13.969249 7658 server/init.go:201  [T1,n?] 590  awaiting `cockroach init` or join with an already initialized node
I230516 14:33:13.969271 7658 server/init.go:265  [T1,n?] 591  cluster ca4b3b03-209d-42e4-94d1-572d803d45ad has been created
I230516 14:33:13.969288 7658 server/init.go:266  [T1,n?] 592  allocated node ID: n1 (for self)
I230516 14:33:13.969300 7658 server/init.go:267  [T1,n?] 593  active cluster version: 23.1
W230516 14:33:13.969554 7658 1@gossip/gossip.go:1404  [T1,n1] 594  no addresses found; use --join to specify a connected node
I230516 14:33:13.969551 7850 1@server/server.go:1652  [T1,n1] 595  connecting to gossip network to verify cluster ID "ca4b3b03-209d-42e4-94d1-572d803d45ad"
I230516 14:33:13.969595 7658 gossip/gossip.go:368  [T1,n1] 596  NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44787" > attrs:<> locality:<tiers:<key:"region" value:"test" > tiers:<key:"dc" value:"dc1" > > ServerVersion:<major_val:23 minor_val:1 patch:0 internal:0 > build_tag:"v23.1.0-dev" started_at:1684247593969594616 cluster_name:"" sql_address:<network_field:"tcp" address_field:"127.0.0.1:41181" > http_address:<network_field:"tcp" address_field:"127.0.0.1:44913" >
I230516 14:33:13.974883 7658 server/node.go:517  [T1,n1] 597  initialized store s1
I230516 14:33:13.974946 7658 kv/kvserver/stores.go:264  [T1,n1] 598  read 0 node addresses from persistent storage
I230516 14:33:13.975001 7658 server/node.go:611  [T1,n1] 599  started with engine type 2
I230516 14:33:13.975016 7658 server/node.go:613  [T1,n1] 600  started with attributes []
I230516 14:33:13.975143 7658 1@server/server.go:1815  [T1,n1] 601  starting https server at 127.0.0.1:44913 (use: 127.0.0.1:44913)
I230516 14:33:13.975174 7658 1@server/server.go:1820  [T1,n1] 602  starting postgres server at 127.0.0.1:41181 (use: 127.0.0.1:41181)
I230516 14:33:13.975193 7658 1@server/server.go:1823  [T1,n1] 603  starting grpc server at 127.0.0.1:44787
I230516 14:33:13.975209 7658 1@server/server.go:1824  [T1,n1] 604  advertising CockroachDB node at 127.0.0.1:44787
I230516 14:33:13.975695 7850 1@server/server.go:1655  [T1,n1] 605  node connected via gossip
I230516 14:33:14.002075 7927 3@pebble/event.go:734  [n1,s1,pebble] 606  [JOB 5] WAL created 000004
I230516 14:33:14.007705 7838 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 607  spanconfig-subscriber: established range feed cache
I230516 14:33:14.008407 7658 1@util/log/event_log.go:32  [T1,n1] 608 ={"Timestamp":1684247594008403036,"EventType":"node_join","NodeID":1,"StartedAt":1684247593969594616,"LastUp":1684247593969594616}
I230516 14:33:14.008575 7841 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 609  settings-watcher: established range feed cache
I230516 14:33:14.009656 7989 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r49/1:/Table/4{7-8}] 610  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.009834 7916 kv/kvserver/queue.go:614  [T1,n1,s1,r10/1:/Table/{6-7},raft] 611  rate limited in MaybeAdd (merge): throttled on async limiting semaphore
I230516 14:33:14.009847 7990 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r49/1:/Table/4{7-8}] 612  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.010428 8175 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r10/1:/Table/{6-7}] 613  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.011059 7658 sql/sqlliveness/slinstance/slinstance.go:434  [T1,n1] 614  starting SQL liveness instance
I230516 14:33:14.011179 8058 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r11/1:/Table/{7-8}] 615  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.012295 8125 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r41/1:/Table/{39-40}] 616  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.012571 8190 sql/sqlliveness/slstorage/slstorage.go:540  [T1,n1] 617  inserted sqlliveness session 010180ff290dfb8c68402d8de329c5441a53eb
I230516 14:33:14.012613 8190 sql/sqlliveness/slinstance/slinstance.go:258  [T1,n1] 618  created new SQL liveness session 010180ff290dfb8c68402d8de329c5441a53eb
I230516 14:33:14.012639 7658 sql/sqlinstance/instancestorage/instancestorage.go:342  [T1,n1] 619  assigning instance id to rpc addr 127.0.0.1:44787 and sql addr 127.0.0.1:41181
I230516 14:33:14.013242 8195 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r48/1:/Table/4{6-7}] 620  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.013470 7658 server/server_sql.go:1507  [T1,n1] 621  bound sqlinstance: Instance{RegionPrefix: gA==, InstanceID: 1, SQLAddr: 127.0.0.1:41181, RPCAddr: 127.0.0.1:44787, SessionID: 010180ff290dfb8c68402d8de329c5441a53eb, Locality: region=test,dc=dc1, BinaryVersion: 23.1}
I230516 14:33:14.013733 7658 upgrade/upgrademanager/manager.go:170  [T1,n1] 622  running permanent upgrades up to version: 23.1
I230516 14:33:14.014503 8227 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r14/1:/Table/1{2-3}] 623  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.014568 8200 sql/temporary_schema.go:505  [T1,n1] 624  running temporary object cleanup background job
I230516 14:33:14.014650 8201 sql/sqlstats/persistedsqlstats/provider.go:167  [T1,n1] 625  starting sql-stats-worker with initial delay: 10m22.756491135s
I230516 14:33:14.016143 8219 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 626  system-config-cache: established range feed cache
I230516 14:33:14.017786 8275 kv/kvserver/replica_rangefeed.go:696  [T1,n1,rangefeed=table-stats-cache,dest_n=1,dest_s=1,dest_r=22,s1,r22/1:/Table/2{0-1}] 627  RangeFeed closed timestamp is empty
I230516 14:33:14.018698 8253 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r39/1:/Table/3{7-8}] 628  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.020427 8041 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r32/1:/NamespaceTable/{30-Max}] 629  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.024557 8255 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r17/1:/Table/1{5-6}] 630  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.024778 8292 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r37/1:/Table/3{5-6}] 631  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.025843 8218 jobs/registry.go:616  [T1,n1] 632  active version is 23.1
I230516 14:33:14.026308 8200 sql/temporary_schema.go:555  [T1,n1] 633  found 0 temporary schemas
I230516 14:33:14.026342 8200 sql/temporary_schema.go:558  [T1,n1] 634  early exiting temporary schema cleaner as no temporary schemas were found
I230516 14:33:14.026353 8200 sql/temporary_schema.go:559  [T1,n1] 635  completed temporary object cleanup job
I230516 14:33:14.026363 8200 sql/temporary_schema.go:640  [T1,n1] 636  temporary object cleaner next scheduled to run at 2023-05-16 15:03:14.014534701 +0000 UTC m=+1805.351777920
I230516 14:33:14.027848 8356 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r42/1:/Table/4{0-1}] 637  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.028389 7658 upgrade/upgrademanager/manager.go:238  [T1,n1] 638  the last permanent upgrade (v22.2-102) does not appear to have completed; attempting to run all upgrades
I230516 14:33:14.028534 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 639  running permanent upgrade for version 0.0-2
I230516 14:33:14.034182 8379 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r54/1:/Table/5{3-4}] 640  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.040251 8218 spanconfig/spanconfigmanager/manager.go:144  [T1,n1] 641  started auto span config reconciliation job
I230516 14:33:14.040316 8319 jobs/adopt.go:261  [T1,n1] 642  job 865655829350580225: resuming execution
I230516 14:33:14.056410 8347 jobs/registry.go:1584  [T1,n1] 643  AUTO SPAN CONFIG RECONCILIATION job 865655829350580225: stepping through state running
I230516 14:33:14.061913 7658 jobs/registry.go:616  [T1,n1] 644  active version is 23.1
I230516 14:33:14.070472 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 645  running Upgrade to 0.0-2: "add users and roles"
I230516 14:33:14.070547 8412 jobs/adopt.go:261  [T1,n1] 646  job 865655829393047553: resuming execution
I230516 14:33:14.079011 8501 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r34/1:/Table/3{2-3}] 647  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.089103 8416 jobs/registry.go:1584  [T1,n1] 648  MIGRATION job 865655829393047553: stepping through state running
I230516 14:33:14.102930 8556 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r8/1:/Table/{4-5}] 649  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.116280 8492 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r25/1:/Table/2{3-4}] 650  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.125652 8347 spanconfig/spanconfigsqlwatcher/sqlwatcher.go:246  [T1,n1,job=AUTO SPAN CONFIG RECONCILIATION id=865655829350580225] 651  established range feed over system.descriptors starting at time 1684247594.080574357,0
I230516 14:33:14.125719 8347 spanconfig/spanconfigsqlwatcher/sqlwatcher.go:305  [T1,n1,job=AUTO SPAN CONFIG RECONCILIATION id=865655829350580225] 652  established range feed over system.zones starting at time 1684247594.080574357,0
I230516 14:33:14.125803 8347 spanconfig/spanconfigsqlwatcher/sqlwatcher.go:401  [T1,n1,job=AUTO SPAN CONFIG RECONCILIATION id=865655829350580225] 653  established range feed over system.protected_ts_records starting at time 1684247594.080574357,0
I230516 14:33:14.127937 8416 jobs/registry.go:1584  [T1,n1] 654  MIGRATION job 865655829393047553: stepping through state succeeded
I230516 14:33:14.145518 7658 jobs/wait.go:145  [T1,n1] 655  waited for 1 [865655829393047553] queued jobs to complete 74.986222ms
I230516 14:33:14.145691 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 656  running permanent upgrade for version 0.0-4
I230516 14:33:14.163722 7658 jobs/registry.go:616  [T1,n1] 657  active version is 23.1
I230516 14:33:14.172092 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 658  running Upgrade to 0.0-4: "enable diagnostics reporting"
I230516 14:33:14.172163 8659 jobs/adopt.go:261  [T1,n1] 659  job 865655829776695297: resuming execution
I230516 14:33:14.186054 8619 jobs/registry.go:1584  [T1,n1] 660  MIGRATION job 865655829776695297: stepping through state running
I230516 14:33:14.203487 8651 util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655829776695297,upgrade=0.0-4,intExec=optInToDiagnosticsStatReporting] 661 ={"Timestamp":1684247594196845286,"EventType":"set_cluster_setting","Statement":"SET CLUSTER SETTING \"diagnostics.reporting.enabled\" = true","Tag":"SET CLUSTER SETTING","User":"root","ApplicationName":"$ internal-optInToDiagnosticsStatReporting","SettingName":"diagnostics.reporting.enabled","Value":"true"}
I230516 14:33:14.204881 8619 jobs/registry.go:1584  [T1,n1] 662  MIGRATION job 865655829776695297: stepping through state succeeded
I230516 14:33:14.225129 7658 jobs/wait.go:145  [T1,n1] 663  waited for 1 [865655829776695297] queued jobs to complete 52.982058ms
I230516 14:33:14.225440 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 664  running permanent upgrade for version 0.0-6
I230516 14:33:14.262438 7658 jobs/registry.go:616  [T1,n1] 665  active version is 23.1
I230516 14:33:14.273676 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 666  running Upgrade to 0.0-6: "populate initial version cluster setting table entry"
I230516 14:33:14.273754 8714 jobs/adopt.go:261  [T1,n1] 667  job 865655830038249473: resuming execution
I230516 14:33:14.287435 8718 jobs/registry.go:1584  [T1,n1] 668  MIGRATION job 865655830038249473: stepping through state running
I230516 14:33:14.301576 8735 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r51/1:/Table/5{0-1}] 669  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.303757 8718 jobs/registry.go:1584  [T1,n1] 670  MIGRATION job 865655830038249473: stepping through state succeeded
I230516 14:33:14.323057 7658 jobs/wait.go:145  [T1,n1] 671  waited for 1 [865655830038249473] queued jobs to complete 49.321988ms
I230516 14:33:14.323267 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 672  running permanent upgrade for version 0.0-8
I230516 14:33:14.340458 7658 jobs/registry.go:616  [T1,n1] 673  active version is 23.1
I230516 14:33:14.349192 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 674  running Upgrade to 0.0-8: "initialize the cluster.secret setting"
I230516 14:33:14.349303 8778 jobs/adopt.go:261  [T1,n1] 675  job 865655830358589441: resuming execution
I230516 14:33:14.365265 8805 jobs/registry.go:1584  [T1,n1] 676  MIGRATION job 865655830358589441: stepping through state running
I230516 14:33:14.377762 8805 jobs/registry.go:1584  [T1,n1] 677  MIGRATION job 865655830358589441: stepping through state succeeded
I230516 14:33:14.397817 7658 jobs/wait.go:145  [T1,n1] 678  waited for 1 [865655830358589441] queued jobs to complete 48.54355ms
I230516 14:33:14.398000 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 679  running permanent upgrade for version 0.0-10
I230516 14:33:14.415685 7658 jobs/registry.go:616  [T1,n1] 680  active version is 23.1
I230516 14:33:14.423703 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 681  running Upgrade to 0.0-10: "update system.locations with default location data"
I230516 14:33:14.423768 8909 jobs/adopt.go:261  [T1,n1] 682  job 865655830603464705: resuming execution
I230516 14:33:14.437104 8864 jobs/registry.go:1584  [T1,n1] 683  MIGRATION job 865655830603464705: stepping through state running
I230516 14:33:14.449867 8936 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r23/1:/Table/2{1-2}] 684  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:14.459977 8864 jobs/registry.go:1584  [T1,n1] 685  MIGRATION job 865655830603464705: stepping through state succeeded
I230516 14:33:14.477837 7658 jobs/wait.go:145  [T1,n1] 686  waited for 1 [865655830603464705] queued jobs to complete 54.084995ms
I230516 14:33:14.478018 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 687  running permanent upgrade for version 0.0-12
I230516 14:33:14.496808 7658 jobs/registry.go:616  [T1,n1] 688  active version is 23.1
I230516 14:33:14.505046 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 689  running Upgrade to 0.0-12: "create default databases"
I230516 14:33:14.505312 8985 jobs/adopt.go:261  [T1,n1] 690  job 865655830865674241: resuming execution
I230516 14:33:14.519332 8990 jobs/registry.go:1584  [T1,n1] 691  MIGRATION job 865655830865674241: stepping through state running
I230516 14:33:14.531648 8971 5@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655830865674241,upgrade=0.0-12,intExec=create-default-DB] 692 ={"Timestamp":1684247594527364254,"EventType":"create_database","Statement":"CREATE DATABASE IF NOT EXISTS defaultdb","Tag":"CREATE DATABASE","User":"root","DescriptorID":100,"ApplicationName":"$ internal-create-default-DB","DatabaseName":"defaultdb"}
I230516 14:33:14.535275 8974 5@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655830865674241,upgrade=0.0-12,intExec=create-default-DB] 693 ={"Timestamp":1684247594531976368,"EventType":"create_database","Statement":"CREATE DATABASE IF NOT EXISTS postgres","Tag":"CREATE DATABASE","User":"root","DescriptorID":102,"ApplicationName":"$ internal-create-default-DB","DatabaseName":"postgres"}
I230516 14:33:14.536887 8990 jobs/registry.go:1584  [T1,n1] 694  MIGRATION job 865655830865674241: stepping through state succeeded
I230516 14:33:14.542823 7915 3@pebble/event.go:734  [n1,s1,pebble] 695  [JOB 6] WAL created 000005
I230516 14:33:14.555151 7658 jobs/wait.go:145  [T1,n1] 696  waited for 1 [865655830865674241] queued jobs to complete 49.865005ms
I230516 14:33:14.555310 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 697  running permanent upgrade for version 22.1-42
I230516 14:33:14.573092 7658 jobs/registry.go:616  [T1,n1] 698  active version is 23.1
I230516 14:33:14.580806 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 699  running Upgrade to 22.1-42: "add default SQL schema telemetry schedule"
I230516 14:33:14.580885 9061 jobs/adopt.go:261  [T1,n1] 700  job 865655831118970881: resuming execution
I230516 14:33:14.594611 9066 jobs/registry.go:1584  [T1,n1] 701  MIGRATION job 865655831118970881: stepping through state running
I230516 14:33:14.605215 9066 jobs/registry.go:1584  [T1,n1] 702  MIGRATION job 865655831118970881: stepping through state succeeded
I230516 14:33:14.623060 7658 jobs/wait.go:145  [T1,n1] 703  waited for 1 [865655831118970881] queued jobs to complete 42.193552ms
I230516 14:33:14.623420 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 704  running permanent upgrade for version 22.2-32
I230516 14:33:14.638119 7658 jobs/registry.go:616  [T1,n1] 705  active version is 23.1
I230516 14:33:14.645907 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 706  running Upgrade to 22.2-32: "add tables and jobs to support persisting key visualizer samples"
I230516 14:33:14.645979 9189 jobs/adopt.go:261  [T1,n1] 707  job 865655831342153729: resuming execution
I230516 14:33:14.660972 9141 jobs/registry.go:1584  [T1,n1] 708  MIGRATION job 865655831342153729: stepping through state running
I230516 14:33:14.672108 9142 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655831342153729,upgrade=22.2-32,intExec=set-SpanStatsTenantBoundariesTable-TTL] 709 ={"Timestamp":1684247594669620597,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.span_stats_tenant_boundaries CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":57,"ApplicationName":"$ internal-set-SpanStatsTenantBoundariesTable-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.span_stats_tenant_boundaries","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.680772 9141 jobs/registry.go:1584  [T1,n1] 710  MIGRATION job 865655831342153729: stepping through state succeeded
I230516 14:33:14.701962 7658 jobs/wait.go:145  [T1,n1] 711  waited for 1 [865655831342153729] queued jobs to complete 55.996256ms
I230516 14:33:14.702144 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 712  running permanent upgrade for version 22.2-38
I230516 14:33:14.720682 7658 jobs/registry.go:616  [T1,n1] 713  active version is 23.1
I230516 14:33:14.727962 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 714  running Upgrade to 22.2-38: "create jobs metrics polling job"
I230516 14:33:14.728033 9228 jobs/adopt.go:261  [T1,n1] 715  job 865655831600103425: resuming execution
I230516 14:33:14.741120 9282 jobs/registry.go:1584  [T1,n1] 716  MIGRATION job 865655831600103425: stepping through state running
I230516 14:33:14.764672 9282 jobs/registry.go:1584  [T1,n1] 717  MIGRATION job 865655831600103425: stepping through state succeeded
I230516 14:33:14.780995 7658 jobs/wait.go:145  [T1,n1] 718  waited for 1 [865655831600103425] queued jobs to complete 52.980543ms
I230516 14:33:14.781270 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 719  running permanent upgrade for version 22.2-90
I230516 14:33:14.797231 7658 jobs/registry.go:616  [T1,n1] 720  active version is 23.1
I230516 14:33:14.804762 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 721  running Upgrade to 22.2-90: "create auto config runner job"
I230516 14:33:14.804833 9353 jobs/adopt.go:261  [T1,n1] 722  job 865655831859494913: resuming execution
I230516 14:33:14.818318 9358 jobs/registry.go:1584  [T1,n1] 723  MIGRATION job 865655831859494913: stepping through state running
I230516 14:33:14.829329 9358 jobs/registry.go:616  [T1,n1,job=MIGRATION id=865655831859494913,upgrade=22.2-90] 724  active version is 23.1
I230516 14:33:14.837865 9400 jobs/adopt.go:261  [T1,n1,job=MIGRATION id=865655831859494913,upgrade=22.2-90] 725  job 102: resuming execution
I230516 14:33:14.839238 9358 jobs/registry.go:1584  [T1,n1] 726  MIGRATION job 865655831859494913: stepping through state succeeded
I230516 14:33:14.849718 9405 jobs/registry.go:1584  [T1,n1] 727  AUTO CONFIG RUNNER job 102: stepping through state running
I230516 14:33:14.855692 9405 server/autoconfig/auto_config.go:97  [T1,n1,job=AUTO CONFIG RUNNER id=102] 728  using provider with type acprovider.NoTaskProvider
I230516 14:33:14.855732 9405 server/autoconfig/auto_config.go:74  [T1,n1,job=AUTO CONFIG RUNNER id=102] 729  waiting for environment activation...
I230516 14:33:14.855744 9405 server/autoconfig/auto_config.go:74  [T1,n1,job=AUTO CONFIG RUNNER id=102] 730  waiting for environment activation...
I230516 14:33:14.856950 7658 jobs/wait.go:145  [T1,n1] 731  waited for 1 [865655831859494913] queued jobs to complete 52.133104ms
I230516 14:33:14.857083 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 732  running permanent upgrade for version 22.2-98
I230516 14:33:14.878807 7658 jobs/registry.go:616  [T1,n1] 733  active version is 23.1
I230516 14:33:14.887984 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 734  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230516 14:33:14.888187 9469 jobs/adopt.go:261  [T1,n1] 735  job 865655832107745281: resuming execution
I230516 14:33:14.902206 9478 jobs/registry.go:1584  [T1,n1] 736  MIGRATION job 865655832107745281: stepping through state running
I230516 14:33:14.914522 9447 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 737 ={"Timestamp":1684247594910598000,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.918282 9456 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 738 ={"Timestamp":1684247594914818665,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.921649 9512 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 739 ={"Timestamp":1684247594918545172,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.924592 9499 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=865655832107745281,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 740 ={"Timestamp":1684247594921954433,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230516 14:33:14.929309 9478 jobs/registry.go:1584  [T1,n1] 741  MIGRATION job 865655832107745281: stepping through state succeeded
I230516 14:33:14.948558 7658 jobs/wait.go:145  [T1,n1] 742  waited for 1 [865655832107745281] queued jobs to complete 60.388972ms
I230516 14:33:14.948757 7658 upgrade/upgrademanager/manager.go:283  [T1,n1] 743  running permanent upgrade for version 22.2-102
I230516 14:33:14.972048 7658 jobs/registry.go:616  [T1,n1] 744  active version is 23.1
I230516 14:33:14.983465 9594 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:44787,class=default,rpc] 745  connection is now ready
I230516 14:33:14.988970 7658 upgrade/upgrademanager/manager.go:742  [T1,n1] 746  running Upgrade to 22.2-102: "create sql activity updater job"
I230516 14:33:14.989032 9583 jobs/adopt.go:261  [T1,n1] 747  job 865655832408162305: resuming execution
I230516 14:33:15.002971 9545 jobs/registry.go:1584  [T1,n1] 748  MIGRATION job 865655832408162305: stepping through state running
I230516 14:33:15.019490 9545 jobs/registry.go:1584  [T1,n1] 749  MIGRATION job 865655832408162305: stepping through state succeeded
I230516 14:33:15.042741 7658 jobs/wait.go:145  [T1,n1] 750  waited for 1 [865655832408162305] queued jobs to complete 53.724368ms
I230516 14:33:15.042793 7658 server/server_sql.go:1615  [T1,n1] 751  done ensuring all necessary startup migrations have run
I230516 14:33:15.043300 9717 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 752  tenant-settings-watcher: established range feed cache
I230516 14:33:15.046089 9695 jobs/job_scheduler.go:407  [T1,n1] 753  waiting 3m0s before scheduled jobs daemon start
I230516 14:33:15.047951 9719 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 754  tenant-capability-watcher: established range feed cache
I230516 14:33:15.047970 9748 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 755  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.048808 9704 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 756  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.049034 9719 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 757  received results of a full table scan for tenant capabilities
I230516 14:33:15.049303 9715 server/auto_upgrade.go:78  [T1,n1] 758  no need to upgrade, cluster already at the newest version
I230516 14:33:15.055884 9752 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 759  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.057346 9697 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 760  warmed privileges for virtual tables in 11.054109ms
I230516 14:33:15.078450 9762 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r4/1:/System{/tsd-tse}] 761  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230516 14:33:15.093990 7882 3@pebble/event.go:734  [n1,s1,pebble] 762  [JOB 7] WAL created 000006
I230516 14:33:15.096132 7658 1@server/server_sql.go:1733  [T1,n1] 763  serving sql connections
I230516 14:33:15.097831 7658 testutils/testcluster/testcluster.go:1357  [-] 764  WaitForFullReplication
I230516 14:33:15.097869 7658 testutils/testcluster/testcluster.go:1361  [-] 765  WaitForFullReplication took: 370ns
    autocommit_extended_protocol_test.go:197: 
            Error Trace:    github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
            Error:          An error is expected but got nil.
            Test:           TestErrorDuringExtendedProtocolCommit
I230516 14:33:15.110872 7658 testutils/testcluster/testcluster.go:145  [-] 766  TestCluster quiescing nodes
I230516 14:33:15.111025 8209 sql/stats/automatic_stats.go:550  [T1,n1] 767  quiescing auto stats refresher
W230516 14:33:15.111284 9006 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 768  failed to gc transaction record: could not GC completed transaction anchored at /Table/3/1/102/2/1: node unavailable; try another peer
I230516 14:33:15.111294 7849 server/start_listen.go:103  [T1,n1] 769  server shutting down: instructing cmux to stop accepting
I230516 14:33:15.111316 9594 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:44787,class=default,rpc] 770  connection heartbeat loop ended with err: <nil>
W230516 14:33:15.111732 8190 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 771  exiting heartbeat loop
W230516 14:33:15.111760 8190 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 772  exiting heartbeat loop with error: node unavailable; try another peer
E230516 14:33:15.111784 8190 server/server_sql.go:507  [T1,n1] 773  failed to run update of instance with new session ID: node unavailable; try another peer
I230516 14:33:15.111784 8347 jobs/registry.go:1584  [T1,n1] 774  AUTO SPAN CONFIG RECONCILIATION job 865655829350580225: stepping through state succeeded
E230516 14:33:15.111813 9405 jobs/registry.go:989  [T1,n1] 775  error getting live session: node unavailable; try another peer
I230516 14:33:15.116760 7658 server/server_controller_orchestration.go:263  [T1,n1] 776  server controller shutting down ungracefully
I230516 14:33:15.116804 7658 server/server_controller_orchestration.go:274  [T1,n1] 777  waiting for tenant servers to report stopped
W230516 14:33:15.116877 7658 server/server_sql.go:1693  [T1,n1] 778  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.23s)
msirek commented 1 year ago

Did not reproduce this after running the test under stress for 15 minutes.
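For illustration, a "run until failure" stress loop of the kind described above can be sketched as follows. This is only a generic pattern, not the CockroachDB tooling: the `true` placeholder stands in for the real test invocation (something like `go test -run TestErrorDuringExtendedProtocolCommit ./pkg/sql/tests`, which assumes a repo checkout and Go toolchain), and the attempt count is arbitrary.

```shell
# Generic stress loop: rerun a command until it fails or the attempt budget runs out.
attempts=0
max_attempts=5
while [ "$attempts" -lt "$max_attempts" ]; do
  attempts=$((attempts + 1))
  # Placeholder for the real test command; `true` always succeeds here.
  if ! true; then
    echo "failed after $attempts attempts"
    break
  fi
done
echo "completed $attempts attempts without failure"
```

In practice one would also cap the wall-clock time (e.g. `timeout 15m`) rather than only the attempt count, since a flake may need many iterations to surface.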

rafiss commented 1 year ago

@msirek we already addressed these flakes on release-23.1 and master, but did not backport the fix to this branch since it was frozen by the time the fixes were made. This branch is going away now, so I'm closing this.