cockroachdb / cockroach

CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
https://www.cockroachlabs.com

teamcity: failed tests on master: testrace/TestInitialPartitioning #28828

Closed: cockroach-teamcity closed this issue 6 years ago.

cockroach-teamcity commented 6 years ago

The following tests appear to have failed:

#849577:

--- FAIL: testrace/TestInitialPartitioning (236.700s)

------- Stdout: -------
W180820 14:29:52.331544 923 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 14:29:52.372650 923 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 14:29:52.373171 923 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 14:29:52.373251 923 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 14:29:52.391956 923 server/config.go:496  [n?] 1 storage engine initialized
I180820 14:29:52.392064 923 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180820 14:29:52.392112 923 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180820 14:29:52.566790 923 server/node.go:373  [n?] **** cluster 86a4ec18-704c-48fd-baaf-3c74b0d02478 has been created
I180820 14:29:52.566904 923 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:44425
I180820 14:29:52.567891 923 gossip/gossip.go:383  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44425" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1794-gfbc2970" started_at:1534775392567557493 
I180820 14:29:52.625774 923 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180820 14:29:52.626107 923 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180820 14:29:52.626506 923 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180820 14:29:52.626650 923 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "86a4ec18-704c-48fd-baaf-3c74b0d02478"
I180820 14:29:52.626983 923 server/node.go:546  [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180820 14:29:52.628216 923 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 14:29:52.628288 923 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 14:29:52.637695 1239 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 14:29:52.643028 974 storage/replica_command.go:282  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180820 14:29:52.650038 923 server/server.go:1538  [n1] starting https server at 127.0.0.1:33449 (use: 127.0.0.1:33449)
I180820 14:29:52.650141 923 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:44425
I180820 14:29:52.650193 923 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:44425
W180820 14:29:52.650424 923 jobs/registry.go:308  [n1] unable to get node liveness: node not in the liveness table
I180820 14:29:52.784400 1232 storage/replica_command.go:282  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180820 14:29:52.922962 1069 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180820 14:29:52.985591 957 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180820 14:29:53.190229 1018 storage/replica_command.go:282  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180820 14:29:53.293338 1257 storage/replica_command.go:282  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180820 14:29:53.395408 1302 storage/replica_command.go:282  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180820 14:29:53.459057 1359 storage/replica_command.go:282  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
W180820 14:29:53.477363 1338 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=5bc8e023 key=/Table/SystemConfigSpan/Start rw=true pri=0.02641901 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534775393.290902640,0 orig=1534775393.290902640,0 max=1534775393.290902640,0 wto=false rop=false seq=12
I180820 14:29:53.487215 1151 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180820 14:29:53.617599 1308 storage/replica_command.go:282  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180820 14:29:53.641870 1365 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180820 14:29:53.705977 1278 storage/replica_command.go:282  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180820 14:29:53.755448 1292 storage/replica_command.go:282  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180820 14:29:53.821207 1427 storage/replica_command.go:282  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180820 14:29:53.904053 1448 storage/replica_command.go:282  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180820 14:29:53.960744 1444 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180820 14:29:53.972092 1467 storage/replica_command.go:282  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180820 14:29:54.046506 1429 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180820 14:29:54.066967 1479 storage/replica_command.go:282  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180820 14:29:54.097691 1506 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180820 14:29:54.123747 1421 storage/replica_command.go:282  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180820 14:29:54.147843 923 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180820 14:29:54.147969 923 server/server.go:1597  [n1] serving sql connections
I180820 14:29:54.187426 1497 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:44425} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1794-gfbc2970 StartedAt:1534775392567557493 LocalityAddress:[]} ClusterID:86a4ec18-704c-48fd-baaf-3c74b0d02478 StartedAt:1534775392567557493 LastUp:1534775392567557493}
I180820 14:29:54.197601 1495 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180820 14:29:54.217850 1522 storage/replica_command.go:282  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180820 14:29:54.273572 1520 storage/replica_command.go:282  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180820 14:29:54.347263 1529 storage/replica_command.go:282  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180820 14:29:54.421795 1542 storage/replica_command.go:282  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180820 14:29:54.480060 1536 storage/replica_command.go:282  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180820 14:29:54.534478 1666 storage/replica_command.go:282  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180820 14:29:54.608795 1640 storage/replica_command.go:282  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180820 14:29:54.912746 923 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 14:29:54.987947 923 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 14:29:54.988490 923 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 14:29:54.988603 923 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 14:29:55.009733 923 server/config.go:496  [n?] 1 storage engine initialized
I180820 14:29:55.009852 923 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180820 14:29:55.009903 923 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180820 14:29:55.010313 923 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180820 14:29:55.011460 923 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 14:29:55.107942 1670 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:44425
I180820 14:29:55.110501 1708 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33797}
I180820 14:29:55.115521 923 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180820 14:29:55.115728 923 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "86a4ec18-704c-48fd-baaf-3c74b0d02478"
I180820 14:29:55.118062 1680 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 14:29:55.127908 1679 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 14:29:55.136966 923 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 14:29:55.142713 923 server/node.go:428  [n?] new node allocated ID 2
I180820 14:29:55.143153 923 gossip/gossip.go:383  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33797" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1794-gfbc2970" started_at:1534775395142826817 
I180820 14:29:55.143937 923 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180820 14:29:55.144224 923 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180820 14:29:55.149896 1796 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180820 14:29:55.213658 923 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180820 14:29:55.215422 923 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180820 14:29:55.216683 923 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 14:29:55.218222 923 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 14:29:55.222795 923 server/server.go:1538  [n2] starting https server at 127.0.0.1:43499 (use: 127.0.0.1:43499)
I180820 14:29:55.222907 923 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:33797
I180820 14:29:55.222960 923 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:33797
I180820 14:29:55.229708 1903 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 14:29:55.239081 923 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180820 14:29:55.239234 923 server/server.go:1597  [n2] serving sql connections
I180820 14:29:55.291652 1606 storage/replica_raftstorage.go:538  [replicate,n1,s1,r19/1:/Table/2{2-3}] generated preemptive snapshot 83cef108 at index 18
I180820 14:29:55.375822 1765 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180820 14:29:55.391325 1767 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:33797} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1794-gfbc2970 StartedAt:1534775395142826817 LocalityAddress:[]} ClusterID:86a4ec18-704c-48fd-baaf-3c74b0d02478 StartedAt:1534775395142826817 LastUp:1534775395142826817}
I180820 14:29:55.394072 1943 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180820 14:29:55.406191 1606 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r19/1:/Table/2{2-3}] connection to n2 established
I180820 14:29:55.424440 1606 storage/store_snapshot.go:655  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 18ms
I180820 14:29:55.432283 1973 storage/replica_raftstorage.go:782  [n2,s2,r19/?:{-}] applying preemptive snapshot at index 18 (id=83cef108, encoded size=2247, 1 rocksdb batches, 8 log entries)
I180820 14:29:55.455843 1973 storage/replica_raftstorage.go:788  [n2,s2,r19/?:/Table/2{2-3}] applied preemptive snapshot in 22ms [clear=0ms batch=0ms entries=2ms commit=2ms]
I180820 14:29:55.473168 1606 storage/replica_command.go:792  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=842374548136]
W180820 14:29:55.489477 923 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 14:29:55.507867 1606 storage/replica.go:3749  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 14:29:55.551345 1997 storage/replica_range_lease.go:554  [replicate,n1,s1,r19/1:/Table/2{2-3}] transferring lease to s2
I180820 14:29:55.561773 1772 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).Expiration: false
I180820 14:29:55.561978 1772 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).DeprecatedStartStasis: true
I180820 14:29:55.570786 1793 storage/replica_proposal.go:214  [n2,s2,r19/2:/Table/2{2-3}] new range lease repl=(n2,s2):2 seq=3 start=1534775395.560797889,0 epo=1 pro=1534775395.560818771,0 following repl=(n1,s1):1 seq=2 start=1534775392.574506849,0 exp=1534775401.611051857,0 pro=1534775392.611086983,0
I180820 14:29:55.572985 1997 storage/replica_range_lease.go:617  [replicate,n1,s1,r19/1:/Table/2{2-3}] done transferring lease to s2: <nil>
I180820 14:29:55.575129 1998 storage/replica_raftstorage.go:538  [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot ff3bf0b8 at index 19
I180820 14:29:55.580971 1998 storage/store_snapshot.go:655  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 9, rate-limit: 2.0 MiB/sec, 5ms
I180820 14:29:55.592879 1966 storage/replica_raftstorage.go:782  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 19 (id=ff3bf0b8, encoded size=2785, 1 rocksdb batches, 9 log entries)
I180820 14:29:55.596463 923 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 14:29:55.597760 1966 storage/replica_raftstorage.go:788  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I180820 14:29:55.605363 923 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 14:29:55.605535 923 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 14:29:55.611194 923 server/config.go:496  [n?] 1 storage engine initialized
I180820 14:29:55.616305 923 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180820 14:29:55.620854 923 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180820 14:29:55.621172 923 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180820 14:29:55.621855 923 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 14:29:55.623475 1998 storage/replica_command.go:792  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=842354473944]
I180820 14:29:55.630766 1967 storage/replica_command.go:792  [replicate,n2,s2,r19/2:/Table/2{2-3}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, (n2,s2):2, next=3, gen=842352980104]
I180820 14:29:55.716045 1998 storage/replica.go:3749  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 14:29:55.788012 1967 storage/replica.go:3749  [n2,s2,r19/2:/Table/2{2-3}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 14:29:55.789917 1952 storage/replica_range_lease.go:554  [replicate,n1,s1,r15/1:/Table/1{8-9}] transferring lease to s2
I180820 14:29:55.806642 1920 storage/store.go:2612  [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removing replica
I180820 14:29:55.808199 1952 storage/replica_range_lease.go:617  [replicate,n1,s1,r15/1:/Table/1{8-9}] done transferring lease to s2: <nil>
I180820 14:29:55.808477 1920 storage/replica.go:880  [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
I180820 14:29:55.809899 1827 storage/replica_proposal.go:214  [n2,s2,r15/2:/Table/1{8-9}] new range lease repl=(n2,s2):2 seq=3 start=1534775395.790033760,0 epo=1 pro=1534775395.790047354,0 following repl=(n1,s1):1 seq=2 start=1534775392.574506849,0 exp=1534775401.611051857,0 pro=1534775392.611086983,0
I180820 14:29:55.810698 1928 storage/store.go:3717  [n1,s1,r19/1:/Table/2{2-3}] added to replica GC queue (peer suggestion)
I180820 14:29:55.813061 1928 storage/store.go:3717  [n1,s1,r19/1:/Table/2{2-3}] added to replica GC queue (peer suggestion)
I180820 14:29:55.812516 2084 storage/replica_raftstorage.go:538  [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 88d211e1 at index 74
I180820 14:29:55.819117 2084 storage/store_snapshot.go:655  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n2,s2):?: kv pairs: 57, log entries: 64, rate-limit: 2.0 MiB/sec, 5ms
I180820 14:29:55.821454 2148 storage/replica_raftstorage.go:782  [n2,s2,r1/?:{-}] applying preemptive snapshot at index 74 (id=88d211e1, encoded size=17281, 1 rocksdb batches, 64 log entries)
I180820 14:29:55.830666 2148 storage/replica_raftstorage.go:788  [n2,s2,r1/?:/{Min-System/}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=8ms commit=1ms]
I180820 14:29:55.833655 2084 storage/replica_command.go:792  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=842354808280]
I180820 14:29:55.854237 1776 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:44425
I180820 14:29:55.856766 2090 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:44605}
I180820 14:29:55.872141 2129 storage/replica_command.go:792  [replicate,n2,s2,r15/2:/Table/1{8-9}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, (n2,s2):2, next=3, gen=842353159288]
I180820 14:29:55.877241 923 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180820 14:29:55.877508 923 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "86a4ec18-704c-48fd-baaf-3c74b0d02478"
I180820 14:29:55.880120 2171 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 14:29:55.888689 2170 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 14:29:55.893405 923 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 14:29:55.893937 2084 storage/replica.go:3749  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 14:29:55.926267 923 server/node.go:428  [n?] new node allocated ID 3
I180820 14:29:55.926861 923 gossip/gossip.go:383  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44605" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1794-gfbc2970" started_at:1534775395926519263 
I180820 14:29:55.928220 923 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180820 14:29:55.928535 923 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180820 14:29:55.933449 2106 storage/replica_range_lease.go:554  [replicate,n1,s1,r1/1:/{Min-System/}] transferring lease to s2
I180820 14:29:55.939553 2210 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180820 14:29:55.944227 2211 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180820 14:29:55.949541 2106 storage/replica_range_lease.go:617  [replicate,n1,s1,r1/1:/{Min-System/}] done transferring lease to s2: <nil>
I180820 14:29:55.951745 2188 storage/replica_raftstorage.go:538  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot 6a3c4a6a at index 39
I180820 14:29:55.986344 2188 storage/store_snapshot.go:655  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):?: kv pairs: 37, log entries: 29, rate-limit: 2.0 MiB/sec, 25ms
I180820 14:29:55.992337 2242 storage/replica_raftstorage.go:782  [n2,s2,r4/?:{-}] applying preemptive snapshot at index 39 (id=6a3c4a6a, encoded size=62468, 1 rocksdb batches, 29 log entries)
I180820 14:29:56.031743 2242 storage/replica_raftstorage.go:788  [n2,s2,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 39ms [clear=0ms batch=11ms entries=25ms commit=3ms]
I180820 14:29:56.034491 923 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180820 14:29:56.035518 2129 storage/replica.go:3749  [n2,s2,r15/2:/Table/1{8-9}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 14:29:56.061053 923 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180820 14:29:56.067946 923 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 14:29:56.068099 923 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 14:29:56.083391 923 server/server.go:1538  [n3] starting https server at 127.0.0.1:41931 (use: 127.0.0.1:41931)
I180820 14:29:56.083517 923 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:44605
I180820 14:29:56.083583 923 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:44605
I180820 14:29:56.089106 2356 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 14:29:56.098170 923 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180820 14:29:56.098379 923 server/server.go:1597  [n3] serving sql connections
I180820 14:29:56.117706 2386 storage/replica_command.go:792  [replicate,n2,s2,r1/2:/{Min-System/}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n2,s2):2, next=3, gen=842353436808]
I180820 14:29:56.130865 2188 storage/replica_command.go:792  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=842369028872]
I180820 14:29:56.131023 1928 storage/store.go:3717  [n1,s1,r15/1:/Table/1{8-9}] added to replica GC queue (peer suggestion)
I180820 14:29:56.175330 2329 storage/store.go:2612  [replicaGC,n1,s1,r15/1:/Table/1{8-9}] removing replica
I180820 14:29:56.212842 2385 storage/store.go:2612  [replicaGC,n2,s2,r4/?:/System/{NodeLive…-tsd}] removing replica
I180820 14:29:56.214209 2385 storage/replica.go:880  [replicaGC,n2,s2,r4/?:/System/{NodeLive…-tsd}] removed 26 (19+7) keys in 1ms [clear=0ms commit=0ms]
I180820 14:29:56.239548 2399 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180820 14:29:56.259892 2329 storage/replica.go:880  [replicaGC,n1,s1,r15/1:/Table/1{8-9}] removed 7 (0+7) keys in 25ms [clear=0ms commit=25ms]
I180820 14:29:56.266446 2188 storage/replica.go:3749  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 14:29:56.276356 2386 storage/replica.go:3749  [n2,s2,r1/2:/{Min-System/}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 14:29:56.380625 2400 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180820 14:29:56.380896 1928 storage/store.go:3717  [n1,s1,r1/1:/{Min-System/}] added to replica GC queue (peer suggestion)
I180820 14:29:56.408338 2467 storage/replica_raftstorage.go:538  [raftsnapshot,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated Raft snapshot ee0ded21 at index 42
I180820 14:29:56.427774 2457 storage/store.go:2612  [replicaGC,n1,s1,r1/1:/{Min-System/}] removing replica
I180820 14:29:56.436783 2457 storage/replica.go:880  [replicaGC,n1,s1,r1/1:/{Min-System/}] removed 31 (23+8) keys in 1ms [clear=1ms commit=0ms]
I180820 14:29:56.437987 2325 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:44605} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1794-gfbc2970 StartedAt:1534775395926519263 LocalityAddress:[]} ClusterID:86a4ec18-704c-48fd-baaf-3c74b0d02478 StartedAt:1534775395926519263 LastUp:1534775395926519263}
I180820 14:29:56.443576 2467 storage/store_snapshot.go:655  [raftsnapshot,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n2,s2):2: kv pairs: 40, log entries: 32, rate-limit: 8.0 MiB/sec, 34ms
I180820 14:29:56.444333 2323 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180820 14:29:56.446920 2490 storage/replica_raftstorage.go:782  [n2,s2,r4/2:{-}] applying Raft snapshot at index 42 (id=ee0ded21, encoded size=100056, 1 rocksdb batches, 32 log entries)
I180820 14:29:56.462454 2490 storage/replica_raftstorage.go:788  [n2,s2,r4/2:/System/{NodeLive…-tsd}] applied Raft snapshot in 15ms [clear=0ms batch=0ms entries=14ms commit=0ms]
I180820 14:29:56.472724 2510 storage/replica_command.go:792  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n2,s2):2, next=3, gen=842372734952]
I180820 14:29:56.556009 2510 storage/replica.go:3749  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1] next=3
I180820 14:29:56.571978 2518 storage/store.go:2612  [replicaGC,n2,s2,r4/2:/System/{NodeLive…-tsd}] removing replica
I180820 14:29:56.573570 2518 storage/replica.go:880  [replicaGC,n2,s2,r4/2:/System/{NodeLive…-tsd}] removed 28 (20+8) keys in 1ms [clear=1ms commit=0ms]
I180820 14:29:56.600876 2546 storage/replica_raftstorage.go:538  [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot e7f18427 at index 19
I180820 14:29:56.692286 2511 sql/event_log.go:126  [n1,client=127.0.0.1:35826,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180820 14:29:56.701186 2474 storage/replica_command.go:282  [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180820 14:29:56.742006 2432 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180820 14:29:56.742783 2546 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r12/1:/Table/1{5-6}] connection to n3 established
I180820 14:29:56.755536 2546 storage/store_snapshot.go:655  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 9, rate-limit: 2.0 MiB/sec, 12ms
I180820 14:29:56.758054 2597 storage/replica_raftstorage.go:782  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 19 (id=e7f18427, encoded size=2614, 1 rocksdb batches, 9 log entries)
I180820 14:29:56.760451 2597 storage/replica_raftstorage.go:788  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 14:29:56.763483 2546 storage/replica_command.go:792  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=842354893672]
I180820 14:29:56.818631 2497 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180820 14:29:56.829622 2546 storage/replica.go:3749  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 14:29:56.832593 2511 sql/event_log.go:126  [n1,client=127.0.0.1:35826,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180820 14:29:56.906527 2584 storage/replica_range_lease.go:554  [replicate,n1,s1,r12/1:/Table/1{5-6}] transferring lease to s3
I180820 14:29:56.925200 2584 storage/replica_range_lease.go:617  [replicate,n1,s1,r12/1:/Table/1{5-6}] done transferring lease to s3: <nil>
I180820 14:29:56.930280 2511 sql/event_log.go:126  [n1,client=127.0.0.1:35826,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180820 14:29:56.944865 2659 storage/replica_raftstorage.go:538  [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 9986c5eb at index 62
I180820 14:29:56.950878 2228 storage/replica_proposal.go:214  [n3,s3,r12/2:/Table/1{5-6}] new range lease repl=(n3,s3):2 seq=3 start=1534775396.907716977,0 epo=1 pro=1534775396.907730761,0 following repl=(n1,s1):1 seq=2 start=1534775392.574506849,0 exp=1534775401.611051857,0 pro=1534775392.611086983,0
I180820 14:29:56.956181 2659 storage/store_snapshot.go:655  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 185, log entries: 52, rate-limit: 2.0 MiB/sec, 11ms
--- FAIL: testrace/TestInitialPartitioning: TestInitialPartitioning/COLLATEDSTRING (46.690s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n1: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
    [n2,s2,r7/5:/Table/{SystemCon…-11}] read completed
    full trace:
       === SPAN START: session recording ===
      [n1,client=127.0.0.1:35826,user=root] [NoTxn pos:3906] executing ExecStmt: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
      [n1,client=127.0.0.1:35826,user=root] executing: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da in state: NoTxn
       === SPAN START: sql txn ===
      [n1,client=127.0.0.1:35826,user=root] [Open pos:3906] executing ExecStmt: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
      [n1,client=127.0.0.1:35826,user=root] executing: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da in state: Open
      [n1,client=127.0.0.1:35826,user=root] planning starts: SELECT
      [n1,client=127.0.0.1:35826,user=root] generating optimizer plan
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:35826,user=root,txn=62d27df7,n1] querying next range at /Table/2/1/52/"COLLATEDSTRING"/3/1
      [client=127.0.0.1:35826,user=root,txn=62d27df7,n1] r7: sending batch 1 Get to (n2,s2):5
      [client=127.0.0.1:35826,user=root,txn=62d27df7,n1] sending request to 127.0.0.1:33797
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n2] 1 Get
      [n2,s2] executing 1 requests
      [n2,s2,r7/5:/Table/{SystemCon…-11}] read-only path
      [n2,s2,r7/5:/Table/{SystemCon…-11}] command queue
      [n2,s2,r7/5:/Table/{SystemCon…-11}] waiting for read lock
      [n2,s2,r7/5:/Table/{SystemCon…-11}] read completed
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:35826,user=root] client.Txn did AutoCommit. err: <nil>

      [n1,client=127.0.0.1:35826,user=root] added table 'data.public."COLLATEDSTRING"' to table collection
      [n1,client=127.0.0.1:35826,user=root] optimizer plan failed: not yet implemented: scalar expression: *tree.CollateExpr
      [n1,client=127.0.0.1:35826,user=root] optimizer falls back on heuristic planner
      [n1,client=127.0.0.1:35826,user=root] query is correlated: false
      [n1,client=127.0.0.1:35826,user=root] heuristic planner starts
      [n1,client=127.0.0.1:35826,user=root] found table in table collection for table 'data.public."COLLATEDSTRING"'
      [n1,client=127.0.0.1:35826,user=root] heuristic planner optimizes plan
      [n1,client=127.0.0.1:35826,user=root] heuristic planner optimizes subqueries
      [n1,client=127.0.0.1:35826,user=root] planning ends
      [n1,client=127.0.0.1:35826,user=root] checking distributability
      [n1,client=127.0.0.1:35826,user=root] distributable plan: true
      [n1,client=127.0.0.1:35826,user=root] execution starts: distributed
       === SPAN START: consuming rows ===
      [n1,client=127.0.0.1:35826,user=root] creating DistSQL plan with isLocal=false
      [n1,client=127.0.0.1:35826,user=root] querying next range at /Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd
      [n1,client=127.0.0.1:35826,user=root] running DistSQL plan
       === SPAN START: flow ===
      [n1,client=127.0.0.1:35826,user=root] starting (1 processors, 0 startables)
       === SPAN START: table reader ===
    cockroach.processorid: 0
    cockroach.stat.tablereader.stalltime: 2µs
    cockroach.stat.tablereader.input.rows: 0
      [n1,client=127.0.0.1:35826,user=root] Scan /Table/82/{1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd-2}
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:35826,user=root,txn=7a956c6c,n1] querying next range at /Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd
      [client=127.0.0.1:35826,user=root,txn=7a956c6c,n1] r234: sending batch 1 Scan to (n1,s1):2
      [client=127.0.0.1:35826,user=root,txn=7a956c6c,n1] sending request to local server
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n1] 1 Scan
      [n1,s1] executing 1 requests
      [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] read-only path
      [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] read has no clock uncertainty
      [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] command queue
      [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] waiting for read lock
      [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] read completed
       === SPAN START: ordered aggregator ===
    cockroach.stat.aggregator.mem.max: 10 KiB
    cockroach.processorid: 1
    cockroach.stat.aggregator.input.rows: 0
    cockroach.stat.aggregator.stalltime: 269µs
      [n1,client=127.0.0.1:35826,user=root] accumulation complete
      [n1,client=127.0.0.1:35826,user=root] exiting aggregator
      [n1,client=127.0.0.1:35826,user=root] execution ends
      [n1,client=127.0.0.1:35826,user=root] rows affected: 1
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:35826,user=root] AutoCommit. err: <nil>
      [n1,client=127.0.0.1:35826,user=root] releasing 1 tables
      [n1,client=127.0.0.1:35826,user=root] [NoTxn pos:3907] executing ExecStmt: SET TRACING = off
      [n1,client=127.0.0.1:35826,user=root] executing: SET TRACING = off in state: NoTxn
    goroutine 32070 [running]:
    runtime/debug.Stack(0xa7a358200, 0xc422b2fda0, 0x3db39e0)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xb5
    github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e150c0, 0xc4215a8c30, 0xc422b2fd10)
        /go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
    github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc4215a8c30)
        /go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1156 +0x258
    testing.tRunner(0xc4215a8c30, 0xc423be1320)
        /usr/local/go/src/testing/testing.go:777 +0x16e
    created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180820 14:32:18.531002 2583 storage/store.go:3717  [n3,s3,r229/1:/Table/81/{1/36099…-2}] added to replica GC queue (peer suggestion)
I180820 14:32:18.575712 32054 storage/store.go:2612  [replicaGC,n3,s3,r229/1:/Table/81/{1/36099…-2}] removing replica
I180820 14:32:18.578297 32054 storage/replica.go:880  [replicaGC,n3,s3,r229/1:/Table/81/{1/36099…-2}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 14:32:18.783061 2511 sql/event_log.go:126  [n1,client=127.0.0.1:35826,user=root] Event: "create_table", target: 82, info: {TableName:data.public."COLLATEDSTRING" Statement:CREATE TABLE "COLLATEDSTRING" (a STRING COLLATE da PRIMARY KEY) PARTITION BY LIST (a) (PARTITION p VALUES IN (e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da)) User:root}
I180820 14:32:18.831536 32071 storage/replica_command.go:282  [split,n3,s3,r227/1:/{Table/81/2-Max}] initiating a split of this range at key /Table/82 [r230]
I180820 14:32:18.987244 2275 storage/replica_proposal.go:214  [n3,s3,r227/1:/{Table/81/2-Max}] new range lease repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:19.334191 2511 sql/event_log.go:126  [n1,client=127.0.0.1:35826,user=root] Event: "set_zone_config", target: 82, info: {Target:data."COLLATEDSTRING"@primary Config:constraints: [+n1] User:root}
I180820 14:32:19.407753 32125 storage/replica_command.go:282  [split,n3,s3,r230/1:/{Table/82-Max}] initiating a split of this range at key /Table/82/1 [r231]
I180820 14:32:19.607889 2289 storage/replica_proposal.go:214  [n3,s3,r230/1:/{Table/82-Max}] new range lease repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:19.614272 32227 storage/replica_command.go:282  [split,n3,s3,r231/1:/{Table/82/1-Max}] initiating a split of this range at key /Table/82/2 [r232]
I180820 14:32:19.991101 2265 storage/replica_proposal.go:214  [n3,s3,r231/1:/{Table/82/1-Max}] new range lease repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:20.006456 2511 sql/event_log.go:126  [n1,client=127.0.0.1:35826,user=root] Event: "set_zone_config", target: 82, info: {Target:data."COLLATEDSTRING".p Config:constraints: [+n2] User:root}
I180820 14:32:20.007244 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a < e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:20.150915 32192 storage/replica_command.go:282  [split,n3,s3,r231/1:/Table/82/{1-2}] initiating a split of this range at key /Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02" [r233]
I180820 14:32:20.417263 2301 storage/replica_proposal.go:214  [n3,s3,r231/1:/Table/82/{1-2}] new range lease repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:20.439342 32288 storage/replica_command.go:282  [split,n3,s3,r233/1:/Table/82/{1/"\x8a…-2}] initiating a split of this range at key /Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd [r234]
I180820 14:32:20.448651 32331 storage/replica_raftstorage.go:538  [replicate,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] generated preemptive snapshot 22828eca at index 19
I180820 14:32:20.457970 32331 storage/store_snapshot.go:655  [replicate,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] streamed snapshot to (n1,s1):?: kv pairs: 9, log entries: 9, rate-limit: 2.0 MiB/sec, 8ms
I180820 14:32:20.469324 32357 storage/replica_raftstorage.go:782  [n1,s1,r231/?:{-}] applying preemptive snapshot at index 19 (id=22828eca, encoded size=4561, 1 rocksdb batches, 9 log entries)
I180820 14:32:20.474086 32357 storage/replica_raftstorage.go:788  [n1,s1,r231/?:/Table/82/1{-/"\x8a7…}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I180820 14:32:20.487649 32331 storage/replica_command.go:792  [replicate,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r231:/Table/82/1{-/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"} [(n3,s3):1, next=2, gen=842382525808]
I180820 14:32:20.577433 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:20.634490 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a = e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:20.728852 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a < e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:20.775812 2291 storage/replica_proposal.go:214  [n3,s3,r233/1:/Table/82/{1/"\x8a…-2}] new range lease repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:20.795420 32331 storage/replica.go:3749  [n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180820 14:32:20.843061 32412 storage/replica_range_lease.go:554  [replicate,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] transferring lease to s1
I180820 14:32:20.852630 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:20.878446 32412 storage/replica_range_lease.go:617  [replicate,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] done transferring lease to s1: <nil>
I180820 14:32:20.881488 32454 storage/replica_raftstorage.go:538  [replicate,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] generated preemptive snapshot 757cc4d8 at index 16
I180820 14:32:20.883909 1004 storage/replica_proposal.go:214  [n1,s1,r231/2:/Table/82/1{-/"\x8a7…}] new range lease repl=(n1,s1):2 seq=4 start=1534775540.843315091,0 epo=1 pro=1534775540.843382199,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:20.895699 32454 storage/store_snapshot.go:655  [replicate,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 6, rate-limit: 2.0 MiB/sec, 13ms
I180820 14:32:20.908698 32457 storage/replica_raftstorage.go:782  [n2,s2,r233/?:{-}] applying preemptive snapshot at index 16 (id=757cc4d8, encoded size=3832, 1 rocksdb batches, 6 log entries)
I180820 14:32:20.929461 32457 storage/replica_raftstorage.go:788  [n2,s2,r233/?:/Table/82/1/"\x8a7H\x8d)\xe4\…] applied preemptive snapshot in 19ms [clear=0ms batch=0ms entries=17ms commit=1ms]
I180820 14:32:20.931644 32430 storage/replica_command.go:792  [replicate,n1,s1,r231/2:/Table/82/1{-/"\x8a7…}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r231:/Table/82/1{-/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"} [(n3,s3):1, (n1,s1):2, next=3, gen=842365545360]
I180820 14:32:20.945852 32454 storage/replica_command.go:792  [replicate,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r233:/Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"{-/PrefixEnd} [(n3,s3):1, next=2, gen=842397881056]
I180820 14:32:20.983751 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a < e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:21.175756 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a < e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:21.226472 32454 storage/replica.go:3749  [n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] proposing ADD_REPLICA((n2,s2):2): updated=[(n3,s3):1 (n2,s2):2] next=3
I180820 14:32:21.231060 32430 storage/replica.go:3749  [n1,s1,r231/2:/Table/82/1{-/"\x8a7…}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180820 14:32:21.268907 2583 storage/store.go:3717  [n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] added to replica GC queue (peer suggestion)
I180820 14:32:21.292494 32548 storage/store.go:2612  [replicaGC,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] removing replica
I180820 14:32:21.295355 32548 storage/replica.go:880  [replicaGC,n3,s3,r231/1:/Table/82/1{-/"\x8a7…}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 14:32:21.309765 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a = e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:21.326752 32478 storage/replica_range_lease.go:554  [replicate,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] transferring lease to s2
I180820 14:32:21.341078 1821 storage/replica_proposal.go:214  [n2,s2,r233/2:/Table/82/1/"\x8a7H\x8d)\xe4\…] new range lease repl=(n2,s2):2 seq=4 start=1534775541.331406428,0 epo=1 pro=1534775541.331442072,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:21.341537 32478 storage/replica_range_lease.go:617  [replicate,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] done transferring lease to s2: <nil>
I180820 14:32:21.348657 32512 storage/replica_raftstorage.go:538  [replicate,n3,s3,r234/1:/Table/82/{1/"\x8a…-2}] generated preemptive snapshot a57cc299 at index 12
I180820 14:32:21.385909 32512 storage/store_snapshot.go:655  [replicate,n3,s3,r234/1:/Table/82/{1/"\x8a…-2}] streamed snapshot to (n1,s1):?: kv pairs: 6, log entries: 2, rate-limit: 2.0 MiB/sec, 36ms
I180820 14:32:21.423845 32527 storage/replica_raftstorage.go:782  [n1,s1,r234/?:{-}] applying preemptive snapshot at index 12 (id=a57cc299, encoded size=452, 1 rocksdb batches, 2 log entries)
I180820 14:32:21.426088 32527 storage/replica_raftstorage.go:788  [n1,s1,r234/?:/Table/82/{1/"\x8a…-2}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I180820 14:32:21.470898 32512 storage/replica_command.go:792  [replicate,n3,s3,r234/1:/Table/82/{1/"\x8a…-2}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r234:/Table/82/{1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd-2} [(n3,s3):1, next=2, gen=0]
I180820 14:32:21.620310 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a IN (e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da)
I180820 14:32:21.654705 32512 storage/replica.go:3749  [n3,s3,r234/1:/Table/82/{1/"\x8a…-2}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180820 14:32:21.702355 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:21.768201 32617 storage/replica_range_lease.go:554  [replicate,n3,s3,r234/1:/Table/82/{1/"\x8a…-2}] transferring lease to s1
I180820 14:32:21.799744 1171 storage/replica_proposal.go:214  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] new range lease repl=(n1,s1):2 seq=4 start=1534775541.768367550,0 epo=1 pro=1534775541.768396352,0 following repl=(n3,s3):1 seq=3 start=1534775399.310220589,0 epo=1 pro=1534775399.310232913,0
I180820 14:32:21.801147 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a < e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:21.839129 32617 storage/replica_range_lease.go:617  [replicate,n3,s3,r234/1:/Table/82/{1/"\x8a…-2}] done transferring lease to s1: <nil>
I180820 14:32:21.843000 32620 storage/replica_command.go:792  [replicate,n2,s2,r233/2:/Table/82/1/"\x8a7H\x8d)\xe4\…] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r233:/Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"{-/PrefixEnd} [(n3,s3):1, (n2,s2):2, next=3, gen=842354703912]
I180820 14:32:21.891629 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a = e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:21.963321 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a IN (e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da)
I180820 14:32:22.024444 32620 storage/replica.go:3749  [n2,s2,r233/2:/Table/82/1/"\x8a7H\x8d)\xe4\…] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n2,s2):2] next=3
I180820 14:32:22.066740 5323 storage/store.go:3717  [n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] added to replica GC queue (peer suggestion)
I180820 14:32:22.071482 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:22.089771 32739 storage/store.go:2612  [replicaGC,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] removing replica
I180820 14:32:22.095465 32739 storage/replica.go:880  [replicaGC,n3,s3,r233/1:/Table/82/1/"\x8a7H\x8d)\xe4\…] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 14:32:22.103454 5328 storage/store.go:3736  [n2,s2,r233/2:/Table/82/1/"\x8a7H\x8d)\xe4\…] added to replica GC queue (contacted deleted peer)
I180820 14:32:22.321183 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a < e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:22.527291 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a = e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:22.631266 1237 server/status/runtime.go:433  [n1] runtime stats: 1023 MiB RSS, 651 goroutines, 38 MiB/23 MiB/86 MiB GO alloc/idle/total, 115 MiB/156 MiB CGO alloc/total, 2845.91cgo/sec, 1.61/0.09 %(u/s)time, 0.03 %gc (12x)
I180820 14:32:22.708533 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a IN (e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da)
I180820 14:32:22.855863 32070 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
I180820 14:32:22.861536 32825 storage/replica_command.go:792  [replicate,n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r234:/Table/82/{1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd-2} [(n3,s3):1, (n1,s1):2, next=3, gen=0]
I180820 14:32:23.007410 32070 ccl/partitionccl/partition_test.go:1156  SucceedsSoon: expected to scan on n1: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
[n2,s2,r7/5:/Table/{SystemCon…-11}] read completed
full trace:
   === SPAN START: session recording ===
  [n1,client=127.0.0.1:35826,user=root] [NoTxn pos:3516] executing ExecStmt: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
  [n1,client=127.0.0.1:35826,user=root] executing: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da in state: NoTxn
   === SPAN START: sql txn ===
  [n1,client=127.0.0.1:35826,user=root] [Open pos:3516] executing ExecStmt: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da
  [n1,client=127.0.0.1:35826,user=root] executing: SELECT count(*) FROM "COLLATEDSTRING" WHERE a > e'\U00053748\U000829E4\U000731D2\U000E0C51\U00074D21\U0007C7EC\U000A9BF6\U0006D5F7' COLLATE da in state: Open
  [n1,client=127.0.0.1:35826,user=root] planning starts: SELECT
  [n1,client=127.0.0.1:35826,user=root] generating optimizer plan
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [client=127.0.0.1:35826,user=root,txn=26d30d29,n1] querying next range at /Table/2/1/52/"COLLATEDSTRING"/3/1
  [client=127.0.0.1:35826,user=root,txn=26d30d29,n1] r7: sending batch 1 Get to (n2,s2):5
  [client=127.0.0.1:35826,user=root,txn=26d30d29,n1] sending request to 127.0.0.1:33797
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n2] 1 Get
  [n2,s2] executing 1 requests
  [n2,s2,r7/5:/Table/{SystemCon…-11}] read-only path
  [n2,s2,r7/5:/Table/{SystemCon…-11}] command queue
  [n2,s2,r7/5:/Table/{SystemCon…-11}] waiting for read lock
  [n2,s2,r7/5:/Table/{SystemCon…-11}] read completed
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:35826,user=root] client.Txn did AutoCommit. err: <nil>

  [n1,client=127.0.0.1:35826,user=root] added table 'data.public."COLLATEDSTRING"' to table collection
  [n1,client=127.0.0.1:35826,user=root] optimizer plan failed: not yet implemented: scalar expression: *tree.CollateExpr
  [n1,client=127.0.0.1:35826,user=root] optimizer falls back on heuristic planner
  [n1,client=127.0.0.1:35826,user=root] query is correlated: false
  [n1,client=127.0.0.1:35826,user=root] heuristic planner starts
  [n1,client=127.0.0.1:35826,user=root] found table in table collection for table 'data.public."COLLATEDSTRING"'
  [n1,client=127.0.0.1:35826,user=root] heuristic planner optimizes plan
  [n1,client=127.0.0.1:35826,user=root] heuristic planner optimizes subqueries
  [n1,client=127.0.0.1:35826,user=root] planning ends
  [n1,client=127.0.0.1:35826,user=root] checking distributability
  [n1,client=127.0.0.1:35826,user=root] distributable plan: true
  [n1,client=127.0.0.1:35826,user=root] execution starts: distributed
   === SPAN START: consuming rows ===
  [n1,client=127.0.0.1:35826,user=root] creating DistSQL plan with isLocal=false
  [n1,client=127.0.0.1:35826,user=root] querying next range at /Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd
  [n1,client=127.0.0.1:35826,user=root] running DistSQL plan
   === SPAN START: flow ===
  [n1,client=127.0.0.1:35826,user=root] starting (1 processors, 0 startables)
   === SPAN START: table reader ===
cockroach.stat.tablereader.input.rows: 0
cockroach.stat.tablereader.stalltime: 2µs
cockroach.processorid: 0
  [n1,client=127.0.0.1:35826,user=root] Scan /Table/82/{1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd-2}
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [client=127.0.0.1:35826,user=root,txn=14579fc3,n1] querying next range at /Table/82/1/"\x8a7H\x8d)\xe4\x8c1ғ\fQ\x8cM!\x8c\xc7쏛\xf6\x8b\xd5\xf7\x00\x00\x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02\x02\x02\x02\x02"/PrefixEnd
  [client=127.0.0.1:35826,user=root,txn=14579fc3,n1] r234: sending batch 1 Scan to (n1,s1):2
  [client=127.0.0.1:35826,user=root,txn=14579fc3,n1] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n1] 1 Scan
  [n1,s1] executing 1 requests
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] read-only path
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] read has no clock uncertainty
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] command queue
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] waiting for 1 overlapping requests
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] waited 25.399038ms for overlapping requests
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] waiting for read lock
  [n1,s1,r234/2:/Table/82/{1/"\x8a…-2}] read completed
   === SPAN START: ordered aggregator ===
cockroach.stat.aggregator.mem.max: 10 KiB
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.stalltime: 302µs
  [n1,client=127.0.0.1:35826,user=root] accumulation complete
  [n1,client=127.0.0.1:35826,user=root] exiting aggregator
  [n1,client=127.0.0.1:35826,user=root] exe

Please assign, take a look and update the issue accordingly.

a-robinson commented 6 years ago

Dupe of https://github.com/cockroachdb/cockroach/issues/28789