cockroachdb / cockroach

CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
https://www.cockroachlabs.com

teamcity: failed tests on master: testrace/TestInitialPartitioning #28782

Closed: cockroach-teamcity closed this issue 6 years ago

cockroach-teamcity commented 6 years ago

The following tests appear to have failed:

#845338:

--- FAIL: testrace/TestInitialPartitioning (227.000s)
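A failure like this is typically investigated by re-running the single test under the Go race detector, often under stress to flush out rare interleavings. A minimal sketch of the usual invocation; the package path (`./pkg/ccl/partitionccl`) and the exact Makefile variables are assumptions and may need adjusting for the checkout in question:

```shell
# Hypothetical repro sketch; PKG path and flags are assumptions.
# Run the single failing test under the race detector:
make testrace PKG=./pkg/ccl/partitionccl TESTS=TestInitialPartitioning

# Stress the same test under race to reproduce flaky interleavings:
make stressrace PKG=./pkg/ccl/partitionccl TESTS=TestInitialPartitioning TESTTIMEOUT=5m
```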

------- Stdout: -------
W180818 00:06:09.460444 982 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180818 00:06:09.508635 982 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180818 00:06:09.509177 982 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180818 00:06:09.509269 982 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180818 00:06:09.514454 982 server/config.go:496  [n?] 1 storage engine initialized
I180818 00:06:09.515373 982 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180818 00:06:09.515473 982 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180818 00:06:09.586702 982 server/node.go:373  [n?] **** cluster b2bbeb3f-bde3-461c-8474-c9b0c8255863 has been created
I180818 00:06:09.586827 982 server/server.go:1400  [n?] **** add additional nodes by specifying --join=127.0.0.1:42519
I180818 00:06:09.587725 982 gossip/gossip.go:383  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42519" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1743-ge1e5a21" started_at:1534550769587421706 
I180818 00:06:09.607353 982 storage/store.go:1509  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180818 00:06:09.608117 982 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180818 00:06:09.608422 982 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180818 00:06:09.608830 982 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180818 00:06:09.608975 982 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "b2bbeb3f-bde3-461c-8474-c9b0c8255863"
I180818 00:06:09.609431 982 server/node.go:546  [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180818 00:06:09.610575 982 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:06:09.610681 982 server/server.go:1806  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180818 00:06:09.612728 982 server/server.go:1537  [n1] starting https server at 127.0.0.1:40787 (use: 127.0.0.1:40787)
I180818 00:06:09.612820 982 server/server.go:1539  [n1] starting grpc/postgres server at 127.0.0.1:42519
I180818 00:06:09.612880 982 server/server.go:1540  [n1] advertising CockroachDB node at 127.0.0.1:42519
W180818 00:06:09.613135 982 jobs/registry.go:308  [n1] unable to get node liveness: node not in the liveness table
I180818 00:06:09.614678 1254 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:06:09.649957 1269 storage/replica_command.go:282  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180818 00:06:10.061613 1277 storage/replica_command.go:282  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180818 00:06:10.086464 1237 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180818 00:06:10.097146 1145 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180818 00:06:10.215678 1241 storage/replica_command.go:282  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180818 00:06:10.322642 1348 storage/replica_command.go:282  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
W180818 00:06:10.336309 1244 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=4649af89 key=/Table/SystemConfigSpan/Start rw=true pri=0.01996546 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534550770.199173658,0 orig=1534550770.199173658,0 max=1534550770.199173658,0 wto=false rop=false seq=12
I180818 00:06:10.372401 1316 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180818 00:06:10.411799 1247 storage/replica_command.go:282  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180818 00:06:10.472149 1320 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180818 00:06:10.522425 1344 storage/replica_command.go:282  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180818 00:06:10.606405 1379 storage/replica_command.go:282  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180818 00:06:10.704013 1296 storage/replica_command.go:282  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180818 00:06:10.781003 1328 storage/replica_command.go:282  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180818 00:06:10.831786 1430 storage/replica_command.go:282  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180818 00:06:10.858659 1361 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180818 00:06:10.945864 1479 storage/replica_command.go:282  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180818 00:06:10.987456 1406 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180818 00:06:11.012172 1508 storage/replica_command.go:282  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180818 00:06:11.050738 1373 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180818 00:06:11.101909 1518 storage/replica_command.go:282  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180818 00:06:11.115144 982 server/server.go:1593  [n1] done ensuring all necessary migrations have run
I180818 00:06:11.115279 982 server/server.go:1596  [n1] serving sql connections
I180818 00:06:11.152421 1520 storage/replica_command.go:282  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180818 00:06:11.155890 1417 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180818 00:06:11.188903 1419 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:42519} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1743-ge1e5a21 StartedAt:1534550769587421706 LocalityAddress:[]} ClusterID:b2bbeb3f-bde3-461c-8474-c9b0c8255863 StartedAt:1534550769587421706 LastUp:1534550769587421706}
I180818 00:06:11.242817 1499 storage/replica_command.go:282  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180818 00:06:11.359192 1558 storage/replica_command.go:282  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180818 00:06:11.403354 1603 storage/replica_command.go:282  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180818 00:06:11.458738 1608 storage/replica_command.go:282  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180818 00:06:11.517047 1548 storage/replica_command.go:282  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180818 00:06:11.571062 1634 storage/replica_command.go:282  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180818 00:06:11.623019 1624 storage/replica_command.go:282  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180818 00:06:12.060925 982 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180818 00:06:12.162435 982 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180818 00:06:12.162993 982 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180818 00:06:12.163085 982 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180818 00:06:12.175096 982 server/config.go:496  [n?] 1 storage engine initialized
I180818 00:06:12.175240 982 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180818 00:06:12.175334 982 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180818 00:06:12.175735 982 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180818 00:06:12.195602 982 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180818 00:06:12.273971 1718 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:42519
I180818 00:06:12.278163 1713 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:45793}
I180818 00:06:12.297361 982 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180818 00:06:12.297630 982 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "b2bbeb3f-bde3-461c-8474-c9b0c8255863"
I180818 00:06:12.300016 1727 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180818 00:06:12.304590 1726 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180818 00:06:12.319397 982 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180818 00:06:12.326759 982 server/node.go:428  [n?] new node allocated ID 2
I180818 00:06:12.327183 982 gossip/gossip.go:383  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45793" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1743-ge1e5a21" started_at:1534550772326862942 
I180818 00:06:12.327909 982 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180818 00:06:12.328238 982 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180818 00:06:12.337603 1792 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180818 00:06:12.358062 982 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180818 00:06:12.360570 982 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180818 00:06:12.361913 982 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:06:12.362036 982 server/server.go:1806  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180818 00:06:12.385870 1881 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:06:12.387959 982 server/server.go:1537  [n2] starting https server at 127.0.0.1:46489 (use: 127.0.0.1:46489)
I180818 00:06:12.388091 982 server/server.go:1539  [n2] starting grpc/postgres server at 127.0.0.1:45793
I180818 00:06:12.388148 982 server/server.go:1540  [n2] advertising CockroachDB node at 127.0.0.1:45793
I180818 00:06:12.399059 982 server/server.go:1593  [n2] done ensuring all necessary migrations have run
I180818 00:06:12.400341 982 server/server.go:1596  [n2] serving sql connections
I180818 00:06:12.482663 1954 storage/replica_raftstorage.go:529  [replicate,n1,s1,r9/1:/Table/1{2-3}] generated preemptive snapshot 0852e3ee at index 27
I180818 00:06:12.563998 1662 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180818 00:06:12.655714 1668 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180818 00:06:12.656486 1954 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r9/1:/Table/1{2-3}] connection to n2 established
I180818 00:06:12.679280 1954 storage/store_snapshot.go:655  [replicate,n1,s1,r9/1:/Table/1{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 44, log entries: 17, rate-limit: 2.0 MiB/sec, 22ms
I180818 00:06:12.682123 1990 storage/replica_raftstorage.go:773  [n2,s2,r9/?:{-}] applying preemptive snapshot at index 27 (id=0852e3ee, encoded size=12161, 1 rocksdb batches, 17 log entries)
I180818 00:06:12.687136 1909 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:45793} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1743-ge1e5a21 StartedAt:1534550772326862942 LocalityAddress:[]} ClusterID:b2bbeb3f-bde3-461c-8474-c9b0c8255863 StartedAt:1534550772326862942 LastUp:1534550772326862942}
I180818 00:06:12.688499 1990 storage/replica_raftstorage.go:779  [n2,s2,r9/?:/Table/1{2-3}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=4ms commit=1ms]
I180818 00:06:12.695462 1954 storage/replica_command.go:792  [replicate,n1,s1,r9/1:/Table/1{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, next=2, gen=842359605672]
I180818 00:06:12.704320 1907 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180818 00:06:12.725772 1954 storage/replica.go:3752  [n1,s1,r9/1:/Table/1{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
W180818 00:06:12.753370 982 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180818 00:06:12.776050 2019 storage/replica_range_lease.go:554  [replicate,n1,s1,r9/1:/Table/1{2-3}] transferring lease to s2
I180818 00:06:12.777064 2020 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).Expiration: false
I180818 00:06:12.777233 2020 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).DeprecatedStartStasis: true
I180818 00:06:12.792772 2019 storage/replica_range_lease.go:617  [replicate,n1,s1,r9/1:/Table/1{2-3}] done transferring lease to s2: <nil>
I180818 00:06:12.795086 1801 storage/replica_proposal.go:214  [n2,s2,r9/2:/Table/1{2-3}] new range lease repl=(n2,s2):2 seq=3 start=1534550772.776219872,0 epo=1 pro=1534550772.776233380,0 following repl=(n1,s1):1 seq=2 start=1534550769.591447114,0 exp=1534550778.604684624,0 pro=1534550769.604734049,0
I180818 00:06:12.848994 1999 storage/replica_raftstorage.go:529  [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot 03a05015 at index 23
I180818 00:06:12.863284 982 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180818 00:06:12.864013 982 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180818 00:06:12.864167 982 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180818 00:06:12.901578 982 server/config.go:496  [n?] 1 storage engine initialized
I180818 00:06:12.901803 982 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180818 00:06:12.901857 982 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180818 00:06:12.902181 982 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180818 00:06:12.911600 1999 storage/store_snapshot.go:655  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 932, log entries: 13, rate-limit: 2.0 MiB/sec, 58ms
I180818 00:06:12.914304 2024 storage/replica_raftstorage.go:773  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 23 (id=03a05015, encoded size=151551, 1 rocksdb batches, 13 log entries)
I180818 00:06:12.922114 982 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180818 00:06:12.932479 2024 storage/replica_raftstorage.go:779  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 18ms [clear=0ms batch=0ms entries=16ms commit=1ms]
I180818 00:06:12.959587 2013 storage/replica_command.go:792  [replicate,n2,s2,r9/2:/Table/1{2-3}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r9:/Table/1{2-3} [(n1,s1):1, (n2,s2):2, next=3, gen=842354386064]
I180818 00:06:12.968469 1999 storage/replica_command.go:792  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=842370734904]
I180818 00:06:13.032258 1999 storage/replica.go:3752  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180818 00:06:13.092619 2087 storage/replica_range_lease.go:554  [replicate,n1,s1,r5/1:/System/ts{d-e}] transferring lease to s2
I180818 00:06:13.101608 1817 storage/replica_proposal.go:214  [n2,s2,r5/2:/System/ts{d-e}] new range lease repl=(n2,s2):2 seq=3 start=1534550773.092734398,0 epo=1 pro=1534550773.092747137,0 following repl=(n1,s1):1 seq=2 start=1534550769.591447114,0 exp=1534550778.604684624,0 pro=1534550769.604734049,0
I180818 00:06:13.102966 2013 storage/replica.go:3752  [n2,s2,r9/2:/Table/1{2-3}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180818 00:06:13.110953 2087 storage/replica_range_lease.go:617  [replicate,n1,s1,r5/1:/System/ts{d-e}] done transferring lease to s2: <nil>
I180818 00:06:13.112926 2162 storage/replica_raftstorage.go:529  [replicate,n1,s1,r17/1:/Table/2{0-1}] generated preemptive snapshot fe00ec7d at index 18
I180818 00:06:13.148179 1678 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:42519
I180818 00:06:13.150576 2162 storage/store_snapshot.go:655  [replicate,n1,s1,r17/1:/Table/2{0-1}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 37ms
I180818 00:06:13.152698 2088 storage/replica_raftstorage.go:773  [n2,s2,r17/?:{-}] applying preemptive snapshot at index 18 (id=fe00ec7d, encoded size=2385, 1 rocksdb batches, 8 log entries)
I180818 00:06:13.154660 1760 storage/store.go:3663  [n1,s1,r9/1:/Table/1{2-3}] added to replica GC queue (peer suggestion)
I180818 00:06:13.155284 2088 storage/replica_raftstorage.go:779  [n2,s2,r17/?:/Table/2{0-1}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180818 00:06:13.167971 2162 storage/replica_command.go:792  [replicate,n1,s1,r17/1:/Table/2{0-1}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, next=2, gen=842353918968]
I180818 00:06:13.177334 2152 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:42189}
I180818 00:06:13.209719 982 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180818 00:06:13.210103 982 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "b2bbeb3f-bde3-461c-8474-c9b0c8255863"
I180818 00:06:13.218237 2108 storage/store.go:2563  [replicaGC,n1,s1,r9/1:/Table/1{2-3}] removing replica
I180818 00:06:13.218903 2198 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180818 00:06:13.219967 2108 storage/replica.go:883  [replicaGC,n1,s1,r9/1:/Table/1{2-3}] removed 48 (40+8) keys in 1ms [clear=1ms commit=0ms]
I180818 00:06:13.241181 2197 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180818 00:06:13.242714 2162 storage/replica.go:3752  [n1,s1,r17/1:/Table/2{0-1}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180818 00:06:13.246178 2200 storage/replica_command.go:792  [replicate,n2,s2,r5/2:/System/ts{d-e}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n2,s2):2, next=3, gen=842354622576]
I180818 00:06:13.246490 982 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180818 00:06:13.259556 982 server/node.go:428  [n?] new node allocated ID 3
I180818 00:06:13.260083 982 gossip/gossip.go:383  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:42189" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1743-ge1e5a21" started_at:1534550773259703094 
I180818 00:06:13.263104 982 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180818 00:06:13.263357 982 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180818 00:06:13.267623 2202 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180818 00:06:13.270405 2203 storage/replica_range_lease.go:554  [replicate,n1,s1,r17/1:/Table/2{0-1}] transferring lease to s2
I180818 00:06:13.271090 2169 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180818 00:06:13.282520 2203 storage/replica_range_lease.go:617  [replicate,n1,s1,r17/1:/Table/2{0-1}] done transferring lease to s2: <nil>
I180818 00:06:13.285974 1835 storage/replica_proposal.go:214  [n2,s2,r17/2:/Table/2{0-1}] new range lease repl=(n2,s2):2 seq=3 start=1534550773.270550146,0 epo=1 pro=1534550773.270562662,0 following repl=(n1,s1):1 seq=2 start=1534550769.591447114,0 exp=1534550778.604684624,0 pro=1534550769.604734049,0
I180818 00:06:13.288989 2191 storage/replica_raftstorage.go:529  [replicate,n1,s1,r20/1:/Table/{23-50}] generated preemptive snapshot b278467e at index 18
I180818 00:06:13.297232 2191 storage/store_snapshot.go:655  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 13, log entries: 8, rate-limit: 2.0 MiB/sec, 8ms
I180818 00:06:13.300756 2242 storage/replica_raftstorage.go:773  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 18 (id=b278467e, encoded size=2684, 1 rocksdb batches, 8 log entries)
I180818 00:06:13.303073 2242 storage/replica_raftstorage.go:779  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180818 00:06:13.322375 2191 storage/replica_command.go:792  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=842376276416]
I180818 00:06:13.331742 982 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180818 00:06:13.335684 982 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180818 00:06:13.338621 982 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:06:13.338770 982 server/server.go:1806  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180818 00:06:13.345237 2346 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:06:13.355715 982 server/server.go:1537  [n3] starting https server at 127.0.0.1:42365 (use: 127.0.0.1:42365)
I180818 00:06:13.355877 982 server/server.go:1539  [n3] starting grpc/postgres server at 127.0.0.1:42189
I180818 00:06:13.355930 982 server/server.go:1540  [n3] advertising CockroachDB node at 127.0.0.1:42189
I180818 00:06:13.386990 982 server/server.go:1593  [n3] done ensuring all necessary migrations have run
I180818 00:06:13.387139 982 server/server.go:1596  [n3] serving sql connections
I180818 00:06:13.420896 2200 storage/replica.go:3752  [n2,s2,r5/2:/System/ts{d-e}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180818 00:06:13.469128 2191 storage/replica.go:3752  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180818 00:06:13.519834 2391 storage/store.go:2563  [replicaGC,n1,s1,r5/1:/System/ts{d-e}] removing replica
I180818 00:06:13.522499 2249 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180818 00:06:13.525429 2391 storage/replica.go:883  [replicaGC,n1,s1,r5/1:/System/ts{d-e}] removed 470 (461+9) keys in 5ms [clear=1ms commit=4ms]
I180818 00:06:13.553395 2333 storage/replica_range_lease.go:554  [replicate,n1,s1,r20/1:/Table/{23-50}] transferring lease to s2
I180818 00:06:13.571738 2333 storage/replica_range_lease.go:617  [replicate,n1,s1,r20/1:/Table/{23-50}] done transferring lease to s2: <nil>
I180818 00:06:13.574289 2450 storage/replica_raftstorage.go:529  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot 11fabbe1 at index 24
I180818 00:06:13.594671 1854 storage/replica_proposal.go:214  [n2,s2,r20/2:/Table/{23-50}] new range lease repl=(n2,s2):2 seq=3 start=1534550773.553867512,0 epo=1 pro=1534550773.553880785,0 following repl=(n1,s1):1 seq=2 start=1534550769.591447114,0 exp=1534550778.604684624,0 pro=1534550769.604734049,0
I180818 00:06:13.614294 2375 storage/replica_command.go:792  [replicate,n2,s2,r17/2:/Table/2{0-1}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r17:/Table/2{0-1} [(n1,s1):1, (n2,s2):2, next=3, gen=842353369848]
I180818 00:06:13.711925 2250 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180818 00:06:13.732513 2308 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180818 00:06:13.830247 2310 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:42189} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1743-ge1e5a21 StartedAt:1534550773259703094 LocalityAddress:[]} ClusterID:b2bbeb3f-bde3-461c-8474-c9b0c8255863 StartedAt:1534550773259703094 LastUp:1534550773259703094}
I180818 00:06:13.871798 2450 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] connection to n3 established
I180818 00:06:13.873128 2397 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180818 00:06:13.890434 2450 storage/store_snapshot.go:655  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 14, rate-limit: 2.0 MiB/sec, 18ms
I180818 00:06:13.891949 2334 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180818 00:06:13.895585 2550 storage/replica_raftstorage.go:773  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 24 (id=11fabbe1, encoded size=5327, 1 rocksdb batches, 14 log entries)
I180818 00:06:13.903366 2550 storage/replica_raftstorage.go:779  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=6ms commit=0ms]
I180818 00:06:13.909694 2375 storage/replica.go:3752  [n2,s2,r17/2:/Table/2{0-1}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180818 00:06:13.910709 2450 storage/replica_command.go:792  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=842372427072]
I180818 00:06:13.931946 2525 storage/replica_command.go:792  [replicate,n2,s2,r20/2:/Table/{23-50}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=842395161544]
I180818 00:06:13.932829 1760 storage/store.go:3663  [n1,s1,r17/1:/Table/2{0-1}] added to replica GC queue (peer suggestion)
I180818 00:06:13.956689 2450 storage/replica.go:3752  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180818 00:06:14.011816 2476 storage/replica_range_lease.go:554  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] transferring lease to s3
I180818 00:06:14.016395 2531 storage/store.go:2563  [replicaGC,n1,s1,r17/1:/Table/2{0-1}] removing replica
I180818 00:06:14.017895 2531 storage/replica.go:883  [replicaGC,n1,s1,r17/1:/Table/2{0-1}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180818 00:06:14.035931 2476 storage/replica_range_lease.go:617  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] done transferring lease to s3: <nil>
I180818 00:06:14.038240 2611 storage/replica_raftstorage.go:529  [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot 378190d9 at index 18
I180818 00:06:14.062984 2611 storage/store_snapshot.go:655  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 24ms
I180818 00:06:14.067706 2538 storage/replica_raftstorage.go:773  [n3,s3,r13/?:{-}] applying preemptive snapshot at index 18 (id=378190d9, encoded size=2552, 1 rocksdb batches, 8 log entries)
I180818 00:06:14.070230 2538 storage/replica_raftstorage.go:779  [n3,s3,r13/?:/Table/1{6-7}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180818 00:06:14.075803 2525 storage/replica.go:3752  [n2,s2,r20/2:/Table/{23-50}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180818 00:06:14.082372 2611 storage/replica_command.go:792  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=842350794360]
I180818 00:06:14.083478 2463 sql/event_log.go:126  [n1,client=127.0.0.1:47910,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180818 00:06:14.100854 1760 storage/store.go:3663  [n1,s1,r20/1:/Table/{23-50}] added to replica GC queue (peer suggestion)
I180818 00:06:14.104870 2542 storage/replica_command.go:282  [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180818 00:06:14.107953 2429 storage/store.go:2563  [replicaGC,n1,s1,r20/1:/Table/{23-50}] removing replica
I180818 00:06:14.113394 2429 storage/replica.go:883  [replicaGC,n1,s1,r20/1:/Table/{23-50}] removed 12 (4+8) keys in 4ms [clear=0ms commit=4ms]
I180818 00:06:14.136052 2605 storage/replica_command.go:792  [replicate,n3,s3,r3/2:/System/NodeLiveness{-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, (n3,s3):2, next=3, gen=842372281264]
I180818 00:06:14.225212 2611 storage/replica.go:3752  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180818 00:06:14.296177 2463 sql/event_log.go:126  [n1,client=127.0.0.1:47910,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180818 00:06:14.318461 2605 storage/replica.go:3752  [n3,s3,r3/2:/System/NodeLiveness{-Max}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):2] next=3
I180818 00:06:14.318600 2697 storage/replica_range_lease.go:554  [replicate,n1,s1,r13/1:/Table/1{6-7}] transferring lease to s3
I180818 00:06:14.354318 2697 storage/replica_range_lease.go:617  [replicate,n1,s1,r13/1:/Table/1{6-7}] done transferring lease to s3: <nil>
I180818 00:06:14.355025 2239 storage/replica_proposal.go:214  [n3,s3,r13/2:/Table/1{6-7}] new range lease repl=(n3,s3):2 seq=3 start=1534550774.318742232,0 epo=1 pro=1534550774.318758787,0 following repl=(n1,s1):1 seq=2 start=1534550769.591447114,0 exp=1534550778.604684624,0 pro=1534550769.604734049,0
I180818 00:06:14.356686 2620 storage/replica_raftstorage.go:529  [replicate,n1,s1,r19/1:/Table/2{2-3}] generated preemptive snapshot 31c60d4e at index 18
I180818 00:06:14.359232 2496 storage/store.go:3663  [n1,s1,r3/1:/System/NodeLiveness{-Max}] added to replica GC queue (peer suggestion)
I180818 00:06:14.360784 2699 storage/store.go:2563  [replicaGC,n1,s1,r3/1:/System/NodeLiveness{-Max}] removing replica
I180818 00:06:14.362156 2699 storage/replica.go:883  [replicaGC,n1,s1,r3/1:/System/NodeLiveness{-Max}] removed 12 (3+9) keys in 1ms [clear=0ms commit=0ms]
I180818 00:06:14.366968 2623 storage/replica_raftstorage.go:773  [n3,s3,r19/?:{-}] applying preemptive snapshot at index 18 (id=31c60d4e, encoded size=2385, 1 rocksdb batches, 8 log entries)
I180818 00:06:14.369355 2623 storage/replica_raftstorage.go:779  [n3,s3,r19/?:/Table/2{2-3}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180818 00:06:14.369588 2620 storage/store_snapshot.go:655  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
I180818 00:06:14.376708 2620 storage/replica_command.go:792  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=842405910040]
I180818 00:06:14.442115 2463 sql/event_log.go:126  [n1,client=127.0.0.1:47910,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180818 00:06:14.464551 2620 storage/replica.go:3752  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
--- FAIL: testrace/TestInitialPartitioning: TestInitialPartitioning/DATE (47.030s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n1: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
    [n2,s2,r7/2:/Table/{SystemCon…-11}] read completed
    full trace:
       === SPAN START: session recording ===
      [n1,client=127.0.0.1:47910,user=root] [NoTxn pos:4186] executing ExecStmt: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
      [n1,client=127.0.0.1:47910,user=root] executing: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE in state: NoTxn
       === SPAN START: sql txn ===
      [n1,client=127.0.0.1:47910,user=root] [Open pos:4186] executing ExecStmt: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
      [n1,client=127.0.0.1:47910,user=root] executing: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE in state: Open
      [n1,client=127.0.0.1:47910,user=root] planning starts: SELECT
      [n1,client=127.0.0.1:47910,user=root] generating optimizer plan
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:47910,user=root,txn=e53e0852,n1] querying next range at /Table/2/1/52/"DATE"/3/1
      [client=127.0.0.1:47910,user=root,txn=e53e0852,n1] r7: sending batch 1 Get to (n2,s2):2
      [client=127.0.0.1:47910,user=root,txn=e53e0852,n1] sending request to 127.0.0.1:45793
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n2] 1 Get
      [n2,s2] executing 1 requests
      [n2,s2,r7/2:/Table/{SystemCon…-11}] read-only path
      [n2,s2,r7/2:/Table/{SystemCon…-11}] command queue
      [n2,s2,r7/2:/Table/{SystemCon…-11}] waiting for read lock
      [n2,s2,r7/2:/Table/{SystemCon…-11}] read completed
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:47910,user=root] client.Txn did AutoCommit. err: <nil>

      [n1,client=127.0.0.1:47910,user=root] added table 'data.public."DATE"' to table collection
      [n1,client=127.0.0.1:47910,user=root] optimizer plan succeeded
      [n1,client=127.0.0.1:47910,user=root] planning ends
      [n1,client=127.0.0.1:47910,user=root] checking distributability
      [n1,client=127.0.0.1:47910,user=root] distributable plan: true
      [n1,client=127.0.0.1:47910,user=root] execution starts: distributed
       === SPAN START: consuming rows ===
      [n1,client=127.0.0.1:47910,user=root] creating DistSQL plan with distributedMode=true
      [n1,client=127.0.0.1:47910,user=root] querying next range at /Table/84/1
      [n1,client=127.0.0.1:47910,user=root] running DistSQL plan
       === SPAN START: flow ===
      [n1,client=127.0.0.1:47910,user=root] starting (1 processors, 0 startables)
       === SPAN START: table reader ===
    cockroach.processorid: 0
    cockroach.stat.tablereader.input.rows: 0
    cockroach.stat.tablereader.stalltime: 2µs
      [n1,client=127.0.0.1:47910,user=root] Scan /Table/84/1{-/8179/#}
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:47910,user=root,txn=6a30dd05,n1] querying next range at /Table/84/1
      [client=127.0.0.1:47910,user=root,txn=6a30dd05,n1] r251: sending batch 1 Scan to (n1,s1):2
      [client=127.0.0.1:47910,user=root,txn=6a30dd05,n1] sending request to local server
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n1] 1 Scan
      [n1,s1] executing 1 requests
      [n1,s1,r251/2:/Table/84/1{-/8180}] read-only path
      [n1,s1,r251/2:/Table/84/1{-/8180}] read has no clock uncertainty
      [n1,s1,r251/2:/Table/84/1{-/8180}] command queue
      [n1,s1,r251/2:/Table/84/1{-/8180}] waiting for read lock
      [n1,s1,r251/2:/Table/84/1{-/8180}] read completed
       === SPAN START: ordered aggregator ===
    cockroach.processorid: 1
    cockroach.stat.aggregator.mem.max: 10 KiB
    cockroach.stat.aggregator.input.rows: 0
    cockroach.stat.aggregator.stalltime: 282µs
      [n1,client=127.0.0.1:47910,user=root] accumulation complete
      [n1,client=127.0.0.1:47910,user=root] exiting aggregator
      [n1,client=127.0.0.1:47910,user=root] execution ends
      [n1,client=127.0.0.1:47910,user=root] rows affected: 1
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:47910,user=root] AutoCommit. err: <nil>
      [n1,client=127.0.0.1:47910,user=root] releasing 1 tables
      [n1,client=127.0.0.1:47910,user=root] [NoTxn pos:4187] executing ExecStmt: SET TRACING = off
      [n1,client=127.0.0.1:47910,user=root] executing: SET TRACING = off in state: NoTxn
    goroutine 31124 [running]:
    runtime/debug.Stack(0xa7a358200, 0xc421f53140, 0x3d9f000)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xb5
    github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e00420, 0xc424879e00, 0xc421f52f90)
        /go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
    github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc424879e00)
        /go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1156 +0x258
    testing.tRunner(0xc424879e00, 0xc4217c2870)
        /usr/local/go/src/testing/testing.go:777 +0x16e
    created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180818 00:08:35.806269 31008 storage/replica_command.go:792  [replicate,n2,s2,r86/2:/Table/64{-/1}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r86:/Table/64{-/1} [(n1,s1):1, (n2,s2):2, next=3, gen=842376341336]
I180818 00:08:35.937160 31143 storage/replica_command.go:792  [replicate,n1,s1,r249/2:/Table/83/{1/"\x01…-2}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r249:/Table/83/{1/"\x01i\x0f\x00\xb3\x0f\x86ҍ\x90[J,\x9a\xf9~G\x89"/PrefixEnd-2} [(n3,s3):1, (n1,s1):2, next=3, gen=0]
I180818 00:08:36.021943 31008 storage/replica.go:3752  [n2,s2,r86/2:/Table/64{-/1}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180818 00:08:36.042073 2463 sql/event_log.go:126  [n1,client=127.0.0.1:47910,user=root] Event: "create_table", target: 84, info: {TableName:data.public."DATE" Statement:CREATE TABLE "DATE" (a DATE PRIMARY KEY) PARTITION BY LIST (a) (PARTITION p VALUES IN ('1992-05-25':::DATE)) User:root}
I180818 00:08:36.046891 1760 storage/store.go:3663  [n1,s1,r86/1:/Table/64{-/1}] added to replica GC queue (peer suggestion)
I180818 00:08:36.091480 31115 storage/replica_command.go:792  [replicate,n2,s2,r248/2:/Table/83/1/"\x01i\x0f\x00\xb…] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r248:/Table/83/1/"\x01i\x0f\x00\xb3\x0f\x86ҍ\x90[J,\x9a\xf9~G\x89"{-/PrefixEnd} [(n3,s3):1, (n2,s2):2, next=3, gen=842412919360]
I180818 00:08:36.094276 31159 storage/store.go:2563  [replicaGC,n1,s1,r86/1:/Table/64{-/1}] removing replica
I180818 00:08:36.096442 31159 storage/replica.go:883  [replicaGC,n1,s1,r86/1:/Table/64{-/1}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180818 00:08:36.128977 31161 storage/replica_command.go:282  [split,n3,s3,r247/1:/{Table/83/2-Max}] initiating a split of this range at key /Table/84 [r250]
I180818 00:08:36.269371 31115 storage/replica.go:3752  [n2,s2,r248/2:/Table/83/1/"\x01i\x0f\x00\xb…] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n2,s2):2] next=3
I180818 00:08:36.311915 8171 storage/store.go:3663  [n3,s3,r248/1:/Table/83/1/"\x01i\x0f\x00\xb…] added to replica GC queue (peer suggestion)
I180818 00:08:36.349664 31207 storage/store.go:2563  [replicaGC,n3,s3,r248/1:/Table/83/1/"\x01i\x0f\x00\xb…] removing replica
I180818 00:08:36.352580 31207 storage/replica.go:883  [replicaGC,n3,s3,r248/1:/Table/83/1/"\x01i\x0f\x00\xb…] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180818 00:08:36.424011 31143 storage/replica.go:3752  [n1,s1,r249/2:/Table/83/{1/"\x01…-2}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180818 00:08:36.454497 2597 storage/store.go:3663  [n3,s3,r249/1:/Table/83/{1/"\x01…-2}] added to replica GC queue (peer suggestion)
I180818 00:08:36.509581 31211 storage/store.go:2563  [replicaGC,n3,s3,r249/1:/Table/83/{1/"\x01…-2}] removing replica
I180818 00:08:36.512420 31211 storage/replica.go:883  [replicaGC,n3,s3,r249/1:/Table/83/{1/"\x01…-2}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180818 00:08:36.581680 2222 storage/replica_proposal.go:214  [n3,s3,r247/1:/{Table/83/2-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:36.830426 2463 sql/event_log.go:126  [n1,client=127.0.0.1:47910,user=root] Event: "set_zone_config", target: 84, info: {Target:data."DATE"@primary Config:constraints: [+n1] User:root}
I180818 00:08:36.840986 31302 storage/replica_command.go:282  [split,n3,s3,r250/1:/{Table/84-Max}] initiating a split of this range at key /Table/84/1 [r251]
I180818 00:08:37.103543 2240 storage/replica_proposal.go:214  [n3,s3,r250/1:/{Table/84-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:37.116045 31333 storage/replica_command.go:282  [split,n3,s3,r251/1:/{Table/84/1-Max}] initiating a split of this range at key /Table/84/2 [r252]
I180818 00:08:37.312576 2232 storage/replica_proposal.go:214  [n3,s3,r251/1:/{Table/84/1-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:37.347042 31338 storage/replica_raftstorage.go:529  [replicate,n3,s3,r251/1:/Table/84/{1-2}] generated preemptive snapshot 92e845f7 at index 15
I180818 00:08:37.361501 31338 storage/store_snapshot.go:655  [replicate,n3,s3,r251/1:/Table/84/{1-2}] streamed snapshot to (n1,s1):?: kv pairs: 9, log entries: 5, rate-limit: 2.0 MiB/sec, 13ms
I180818 00:08:37.379870 2463 sql/event_log.go:126  [n1,client=127.0.0.1:47910,user=root] Event: "set_zone_config", target: 84, info: {Target:data."DATE".p Config:constraints: [+n2] User:root}
I180818 00:08:37.380627 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:37.385155 31411 storage/replica_raftstorage.go:773  [n1,s1,r251/?:{-}] applying preemptive snapshot at index 15 (id=92e845f7, encoded size=2349, 1 rocksdb batches, 5 log entries)
I180818 00:08:37.418347 31411 storage/replica_raftstorage.go:779  [n1,s1,r251/?:/Table/84/{1-2}] applied preemptive snapshot in 33ms [clear=0ms batch=3ms entries=1ms commit=0ms]
I180818 00:08:37.423854 31338 storage/replica_command.go:792  [replicate,n3,s3,r251/1:/Table/84/{1-2}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r251:/Table/84/{1-2} [(n3,s3):1, next=2, gen=842402911264]
I180818 00:08:37.461625 31428 storage/replica_command.go:282  [split,n3,s3,r251/1:/Table/84/{1-2}] initiating a split of this range at key /Table/84/1/8180 [r253]
I180818 00:08:37.577607 31338 storage/replica.go:3752  [n3,s3,r251/1:/Table/84/{1-2}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180818 00:08:37.618249 31428 storage/split_queue.go:135  [split,n3,s3,r251/1:/Table/84/{1-2}] split saw concurrent descriptor modification; maybe retrying
I180818 00:08:37.619698 31393 storage/replica_command.go:282  [split,n3,s3,r251/1:/Table/84/{1-2}] initiating a split of this range at key /Table/84/1/8180 [r254]
I180818 00:08:37.834700 2270 storage/replica_proposal.go:214  [n3,s3,r251/1:/Table/84/{1-2}] new range lease repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:37.843104 31493 storage/replica_command.go:282  [split,n3,s3,r254/1:/Table/84/{1/8180-2}] initiating a split of this range at key /Table/84/1/8181 [r255]
I180818 00:08:37.844580 31494 storage/replica_range_lease.go:554  [replicate,n3,s3,r251/1:/Table/84/1{-/8180}] transferring lease to s1
I180818 00:08:37.871339 31494 storage/replica_range_lease.go:617  [replicate,n3,s3,r251/1:/Table/84/1{-/8180}] done transferring lease to s1: <nil>
I180818 00:08:37.875075 1204 storage/replica_proposal.go:214  [n1,s1,r251/2:/Table/84/1{-/8180}] new range lease repl=(n1,s1):2 seq=7 start=1534550917.844729955,0 epo=1 pro=1534550917.844759650,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:38.019162 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:38.072845 2237 storage/replica_proposal.go:214  [n3,s3,r254/1:/Table/84/{1/8180-2}] new range lease repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:38.100018 31499 storage/replica_command.go:792  [replicate,n3,s3,r254/1:/Table/84/1/818{0-1}] change replicas (REMOVE_REPLICA (n1,s1):2): read existing descriptor r254:/Table/84/1/818{0-1} [(n3,s3):1, (n1,s1):2, next=3, gen=842416524120]
I180818 00:08:38.107542 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:38.182471 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:38.223046 31545 storage/replica_command.go:792  [replicate,n1,s1,r251/2:/Table/84/1{-/8180}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r251:/Table/84/1{-/8180} [(n3,s3):1, (n1,s1):2, next=3, gen=842402918496]
I180818 00:08:38.238248 31499 storage/replica.go:3752  [n3,s3,r254/1:/Table/84/1/818{0-1}] proposing REMOVE_REPLICA((n1,s1):2): updated=[(n3,s3):1] next=3
I180818 00:08:38.281599 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:38.335284 2496 storage/store.go:3663  [n1,s1,r254/2:/Table/84/1/818{0-1}] added to replica GC queue (peer suggestion)
I180818 00:08:38.396523 31620 storage/store.go:2563  [replicaGC,n1,s1,r254/2:/Table/84/1/818{0-1}] removing replica
I180818 00:08:38.398315 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:38.399980 31620 storage/replica.go:883  [replicaGC,n1,s1,r254/2:/Table/84/1/818{0-1}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
E180818 00:08:38.417069 31569 storage/queue.go:788  [replicate,n3,s3,r255/1:/Table/84/{1/8181-2}] no removable replicas from range that needs a removal: [raft progress unknown]
I180818 00:08:38.427351 31580 storage/replica_raftstorage.go:529  [replicate,n3,s3,r254/1:/Table/84/1/818{0-1}] generated preemptive snapshot 0b4e4f4c at index 19
I180818 00:08:38.440096 31580 storage/store_snapshot.go:655  [replicate,n3,s3,r254/1:/Table/84/1/818{0-1}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 9, rate-limit: 2.0 MiB/sec, 12ms
I180818 00:08:38.449599 31612 storage/replica_raftstorage.go:773  [n2,s2,r254/?:{-}] applying preemptive snapshot at index 19 (id=0b4e4f4c, encoded size=3687, 1 rocksdb batches, 9 log entries)
I180818 00:08:38.463106 31612 storage/replica_raftstorage.go:779  [n2,s2,r254/?:/Table/84/1/818{0-1}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=2ms commit=10ms]
I180818 00:08:38.472792 31580 storage/replica_command.go:792  [replicate,n3,s3,r254/1:/Table/84/1/818{0-1}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r254:/Table/84/1/818{0-1} [(n3,s3):1, next=3, gen=842424506408]
I180818 00:08:38.475321 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:38.517243 31545 storage/replica.go:3752  [n1,s1,r251/2:/Table/84/1{-/8180}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180818 00:08:38.537271 2597 storage/store.go:3663  [n3,s3,r251/1:/Table/84/1{-/8180}] added to replica GC queue (peer suggestion)
I180818 00:08:38.537565 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:38.548133 31645 storage/store.go:2563  [replicaGC,n3,s3,r251/1:/Table/84/1{-/8180}] removing replica
I180818 00:08:38.551068 31645 storage/replica.go:883  [replicaGC,n3,s3,r251/1:/Table/84/1{-/8180}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180818 00:08:38.576773 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:38.661447 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:38.686859 31580 storage/replica.go:3752  [n3,s3,r254/1:/Table/84/1/818{0-1}] proposing ADD_REPLICA((n2,s2):3): updated=[(n3,s3):1 (n2,s2):3] next=4
I180818 00:08:38.717510 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:38.818995 31586 storage/replica_range_lease.go:554  [replicate,n3,s3,r254/1:/Table/84/1/818{0-1}] transferring lease to s2
I180818 00:08:38.842098 1806 storage/replica_proposal.go:214  [n2,s2,r254/3:/Table/84/1/818{0-1}] new range lease repl=(n2,s2):3 seq=7 start=1534550918.819163191,0 epo=1 pro=1534550918.819192488,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:38.846026 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:38.857753 31586 storage/replica_range_lease.go:617  [replicate,n3,s3,r254/1:/Table/84/1/818{0-1}] done transferring lease to s2: <nil>
I180818 00:08:38.916462 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:39.042074 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a IN ('1992-05-25':::DATE)
I180818 00:08:39.193662 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a > '1992-05-25':::DATE
I180818 00:08:39.369374 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a > '1992-05-25':::DATE
I180818 00:08:39.392800 31768 storage/replica_command.go:792  [replicate,n2,s2,r254/3:/Table/84/1/818{0-1}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r254:/Table/84/1/818{0-1} [(n3,s3):1, (n2,s2):3, next=4, gen=842404917360]
I180818 00:08:39.466654 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:39.493359 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a = '1992-05-25':::DATE
I180818 00:08:39.641426 1252 server/status/runtime.go:433  [n1] runtime stats: 977 MiB RSS, 653 goroutines, 40 MiB/19 MiB/83 MiB GO alloc/idle/total, 109 MiB/148 MiB CGO alloc/total, 3160.68cgo/sec, 1.77/0.10 %(u/s)time, 0.01 %gc (13x)
I180818 00:08:39.682824 31768 storage/replica.go:3752  [n2,s2,r254/3:/Table/84/1/818{0-1}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n2,s2):3] next=4
I180818 00:08:39.691099 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a IN ('1992-05-25':::DATE)
I180818 00:08:39.738309 31861 storage/store.go:2563  [replicaGC,n3,s3,r254/1:/Table/84/1/818{0-1}] removing replica
I180818 00:08:39.741403 8171 storage/store.go:3663  [n3,s3,r254/1:/Table/84/1/818{0-1}] added to replica GC queue (peer suggestion)
I180818 00:08:39.741982 31861 storage/replica.go:883  [replicaGC,n3,s3,r254/1:/Table/84/1/818{0-1}] removed 8 (0+8) keys in 1ms [clear=0ms commit=1ms]
I180818 00:08:39.843187 1254 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180818 00:08:39.966853 31949 storage/replica_range_lease.go:554  [replicate,n3,s3,r255/1:/Table/84/{1/8181-2}] transferring lease to s1
I180818 00:08:40.027864 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a > '1992-05-25':::DATE
I180818 00:08:40.178815 31949 storage/replica_range_lease.go:617  [replicate,n3,s3,r255/1:/Table/84/{1/8181-2}] done transferring lease to s1: <nil>
I180818 00:08:40.188861 1195 storage/replica_proposal.go:214  [n1,s1,r255/2:/Table/84/{1/8181-2}] new range lease repl=(n1,s1):2 seq=7 start=1534550919.967013297,0 epo=1 pro=1534550919.967042023,0 following repl=(n3,s3):1 seq=6 start=1534550816.180248848,0 epo=1 pro=1534550816.180333920,0
I180818 00:08:40.331650 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:40.528803 31124 ccl/partitionccl/partition_test.go:1156  SucceedsSoon: expected to scan on n1: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
[n2,s2,r7/2:/Table/{SystemCon…-11}] read completed
full trace:
   === SPAN START: session recording ===
  [n1,client=127.0.0.1:47910,user=root] [NoTxn pos:3760] executing ExecStmt: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
  [n1,client=127.0.0.1:47910,user=root] executing: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE in state: NoTxn
   === SPAN START: sql txn ===
  [n1,client=127.0.0.1:47910,user=root] [Open pos:3760] executing ExecStmt: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
  [n1,client=127.0.0.1:47910,user=root] executing: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE in state: Open
  [n1,client=127.0.0.1:47910,user=root] planning starts: SELECT
  [n1,client=127.0.0.1:47910,user=root] generating optimizer plan
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [client=127.0.0.1:47910,user=root,txn=7a58cc9f,n1] querying next range at /Table/2/1/52/"DATE"/3/1
  [client=127.0.0.1:47910,user=root,txn=7a58cc9f,n1] r7: sending batch 1 Get to (n2,s2):2
  [client=127.0.0.1:47910,user=root,txn=7a58cc9f,n1] sending request to 127.0.0.1:45793
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n2] 1 Get
  [n2,s2] executing 1 requests
  [n2,s2,r7/2:/Table/{SystemCon…-11}] read-only path
  [n2,s2,r7/2:/Table/{SystemCon…-11}] command queue
  [n2,s2,r7/2:/Table/{SystemCon…-11}] waiting for read lock
  [n2,s2,r7/2:/Table/{SystemCon…-11}] read completed
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:47910,user=root] client.Txn did AutoCommit. err: <nil>

  [n1,client=127.0.0.1:47910,user=root] added table 'data.public."DATE"' to table collection
  [n1,client=127.0.0.1:47910,user=root] optimizer plan succeeded
  [n1,client=127.0.0.1:47910,user=root] planning ends
  [n1,client=127.0.0.1:47910,user=root] checking distributability
  [n1,client=127.0.0.1:47910,user=root] distributable plan: true
  [n1,client=127.0.0.1:47910,user=root] execution starts: distributed
   === SPAN START: consuming rows ===
  [n1,client=127.0.0.1:47910,user=root] creating DistSQL plan with distributedMode=true
  [n1,client=127.0.0.1:47910,user=root] querying next range at /Table/84/1
  [n1,client=127.0.0.1:47910,user=root] running DistSQL plan
   === SPAN START: flow ===
  [n1,client=127.0.0.1:47910,user=root] starting (1 processors, 0 startables)
   === SPAN START: table reader ===
cockroach.stat.tablereader.stalltime: 2µs
cockroach.processorid: 0
cockroach.stat.tablereader.input.rows: 0
  [n1,client=127.0.0.1:47910,user=root] Scan /Table/84/1{-/8179/#}
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [client=127.0.0.1:47910,user=root,txn=ee483df6,n1] querying next range at /Table/84/1
  [client=127.0.0.1:47910,user=root,txn=ee483df6,n1] r251: sending batch 1 Scan to (n1,s1):2
  [client=127.0.0.1:47910,user=root,txn=ee483df6,n1] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n1] 1 Scan
  [n1,s1] executing 1 requests
  [n1,s1,r251/2:/Table/84/1{-/8180}] read-only path
  [n1,s1,r251/2:/Table/84/1{-/8180}] read has no clock uncertainty
  [n1,s1,r251/2:/Table/84/1{-/8180}] command queue
  [n1,s1,r251/2:/Table/84/1{-/8180}] waiting for read lock
  [n1,s1,r251/2:/Table/84/1{-/8180}] read completed
   === SPAN START: ordered aggregator ===
cockroach.stat.aggregator.mem.max: 10 KiB
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.stalltime: 348µs
  [n1,client=127.0.0.1:47910,user=root] accumulation complete
  [n1,client=127.0.0.1:47910,user=root] exiting aggregator
  [n1,client=127.0.0.1:47910,user=root] execution ends
  [n1,client=127.0.0.1:47910,user=root] rows affected: 1
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:47910,user=root] AutoCommit. err: <nil>
  [n1,client=127.0.0.1:47910,user=root] releasing 1 tables
  [n1,client=127.0.0.1:47910,user=root] [NoTxn pos:3761] executing ExecStmt: SET TRACING = off
  [n1,client=127.0.0.1:47910,user=root] executing: SET TRACING = off in state: NoTxn
I180818 00:08:40.529174 31124 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "DATE" WHERE a < '1992-05-25':::DATE
I180818 00:08:40.579289 32131 storage/replica_command.go:792  [replicate,n1,s1,r255/2:/Table/84/{1/8181-2}] change replicas (REMO

Please assign, take a look, and update the issue accordingly.

a-robinson commented 6 years ago

Dupe of https://github.com/cockroachdb/cockroach/issues/28789