cockroachdb / cockroach

CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
https://www.cockroachlabs.com

teamcity: failed tests on master: testrace/TestRepartitioning, testrace/TestInitialPartitioning #28820

Closed: cockroach-teamcity closed this issue 6 years ago

cockroach-teamcity commented 6 years ago

The following tests appear to have failed:

#848248:

--- FAIL: testrace/TestInitialPartitioning (194.980s)

------- Stdout: -------
W180819 19:25:59.473542 982 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180819 19:25:59.596154 982 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180819 19:25:59.596682 982 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 19:25:59.596796 982 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 19:25:59.617073 982 server/config.go:496  [n?] 1 storage engine initialized
I180819 19:25:59.618108 982 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180819 19:25:59.618194 982 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180819 19:25:59.680588 982 server/node.go:373  [n?] **** cluster 57064f73-a007-4a8c-940e-19e3fd7442da has been created
I180819 19:25:59.680701 982 server/server.go:1400  [n?] **** add additional nodes by specifying --join=127.0.0.1:33541
I180819 19:25:59.681618 982 gossip/gossip.go:383  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33541" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1775-g9146fae" started_at:1534706759681285293 
I180819 19:25:59.710785 982 storage/store.go:1509  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180819 19:25:59.712712 982 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180819 19:25:59.713039 982 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180819 19:25:59.714837 982 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180819 19:25:59.715197 982 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "57064f73-a007-4a8c-940e-19e3fd7442da"
I180819 19:25:59.715595 982 server/node.go:546  [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180819 19:25:59.730750 982 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:25:59.730876 982 server/server.go:1806  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180819 19:25:59.732735 982 server/server.go:1537  [n1] starting https server at 127.0.0.1:35875 (use: 127.0.0.1:35875)
I180819 19:25:59.732873 982 server/server.go:1539  [n1] starting grpc/postgres server at 127.0.0.1:33541
I180819 19:25:59.732926 982 server/server.go:1540  [n1] advertising CockroachDB node at 127.0.0.1:33541
W180819 19:25:59.733152 982 jobs/registry.go:308  [n1] unable to get node liveness: node not in the liveness table
I180819 19:25:59.738916 1019 storage/replica_command.go:282  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180819 19:25:59.748126 1236 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:25:59.803942 736 storage/replica_command.go:282  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180819 19:25:59.877248 1225 storage/replica_command.go:282  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
W180819 19:25:59.902812 1282 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=87bad65c key=/Table/SystemConfigSpan/Start rw=true pri=0.02449485 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534706759.762461044,0 orig=1534706759.762461044,0 max=1534706759.762461044,0 wto=false rop=false seq=6
I180819 19:25:59.920100 1130 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180819 19:25:59.968120 1266 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180819 19:25:59.971706 1210 storage/replica_command.go:282  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180819 19:26:00.278753 1287 storage/replica_command.go:282  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180819 19:26:00.313327 1286 storage/replica_consistency.go:129  [consistencyChecker,n1,s1,r4/1:/System/{NodeLive…-tsd}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1534706760024238379 IntentAge:0 GCBytesAge:0 LiveBytes:-22043 LiveCount:-461 KeyBytes:-21468 KeyCount:-461 ValBytes:-575 ValCount:-461 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I180819 19:26:00.376306 1233 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180819 19:26:00.418687 1362 storage/replica_command.go:282  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180819 19:26:00.505597 1279 storage/replica_command.go:282  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180819 19:26:00.546062 1289 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180819 19:26:00.693713 1386 storage/replica_command.go:282  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180819 19:26:00.720552 1381 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180819 19:26:00.802484 1368 storage/replica_command.go:282  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180819 19:26:00.877093 1402 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180819 19:26:00.879571 1313 storage/replica_command.go:282  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180819 19:26:00.947629 1427 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180819 19:26:00.962144 1450 storage/replica_command.go:282  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180819 19:26:00.984409 982 server/server.go:1593  [n1] done ensuring all necessary migrations have run
I180819 19:26:00.984546 982 server/server.go:1596  [n1] serving sql connections
I180819 19:26:01.020345 1265 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180819 19:26:01.027801 1475 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:33541} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1775-g9146fae StartedAt:1534706759681285293 LocalityAddress:[]} ClusterID:57064f73-a007-4a8c-940e-19e3fd7442da StartedAt:1534706759681285293 LastUp:1534706759681285293}
I180819 19:26:01.050887 1463 storage/replica_command.go:282  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180819 19:26:01.110050 1471 storage/replica_command.go:282  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180819 19:26:01.207339 1432 storage/replica_command.go:282  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180819 19:26:01.294056 1544 storage/replica_command.go:282  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180819 19:26:01.364206 1518 storage/replica_command.go:282  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180819 19:26:01.401534 1605 storage/replica_command.go:282  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180819 19:26:01.468703 1495 storage/replica_command.go:282  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180819 19:26:01.541437 1615 storage/replica_command.go:282  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180819 19:26:01.634512 1668 storage/replica_command.go:282  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180819 19:26:01.733307 1686 storage/replica_command.go:282  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180819 19:26:02.411718 982 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180819 19:26:02.481825 982 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180819 19:26:02.482543 982 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 19:26:02.490385 982 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 19:26:02.495634 982 server/config.go:496  [n?] 1 storage engine initialized
I180819 19:26:02.495721 982 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180819 19:26:02.495760 982 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180819 19:26:02.496055 982 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180819 19:26:02.496695 982 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180819 19:26:02.651542 1708 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:33541
I180819 19:26:02.653944 1754 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:35725}
I180819 19:26:02.658373 982 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180819 19:26:02.658604 982 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "57064f73-a007-4a8c-940e-19e3fd7442da"
I180819 19:26:02.661364 1680 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 19:26:02.669472 1679 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 19:26:02.675702 982 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 19:26:02.680896 982 server/node.go:428  [n?] new node allocated ID 2
I180819 19:26:02.681355 982 gossip/gossip.go:383  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35725" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1775-g9146fae" started_at:1534706762681034696 
I180819 19:26:02.682085 982 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180819 19:26:02.682397 982 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180819 19:26:02.695132 1713 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180819 19:26:02.713715 982 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180819 19:26:02.729349 982 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180819 19:26:02.730572 982 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:26:02.730674 982 server/server.go:1806  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180819 19:26:02.732729 982 server/server.go:1537  [n2] starting https server at 127.0.0.1:37759 (use: 127.0.0.1:37759)
I180819 19:26:02.732813 982 server/server.go:1539  [n2] starting grpc/postgres server at 127.0.0.1:35725
I180819 19:26:02.732856 982 server/server.go:1540  [n2] advertising CockroachDB node at 127.0.0.1:35725
I180819 19:26:02.746739 1777 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:26:02.747333 982 server/server.go:1593  [n2] done ensuring all necessary migrations have run
I180819 19:26:02.747564 982 server/server.go:1596  [n2] serving sql connections
I180819 19:26:02.789060 1886 storage/replica_raftstorage.go:529  [replicate,n1,s1,r20/1:/Table/{23-50}] generated preemptive snapshot b2785608 at index 18
I180819 19:26:02.926518 1912 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
W180819 19:26:02.986769 982 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180819 19:26:02.997976 1893 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180819 19:26:03.005581 1895 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:35725} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1775-g9146fae StartedAt:1534706762681034696 LocalityAddress:[]} ClusterID:57064f73-a007-4a8c-940e-19e3fd7442da StartedAt:1534706762681034696 LastUp:1534706762681034696}
I180819 19:26:03.034640 1943 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180819 19:26:03.038567 1886 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r20/1:/Table/{23-50}] connection to n2 established
I180819 19:26:03.043368 1886 storage/store_snapshot.go:655  [replicate,n1,s1,r20/1:/Table/{23-50}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
I180819 19:26:03.045321 1945 storage/replica_raftstorage.go:773  [n2,s2,r20/?:{-}] applying preemptive snapshot at index 18 (id=b2785608, encoded size=2379, 1 rocksdb batches, 8 log entries)
I180819 19:26:03.048155 1945 storage/replica_raftstorage.go:779  [n2,s2,r20/?:/Table/{23-50}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I180819 19:26:03.050989 1886 storage/replica_command.go:792  [replicate,n1,s1,r20/1:/Table/{23-50}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, next=2, gen=842354760640]
I180819 19:26:03.113399 1886 storage/replica.go:3749  [n1,s1,r20/1:/Table/{23-50}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 19:26:03.118166 982 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180819 19:26:03.119450 982 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 19:26:03.119774 982 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 19:26:03.133720 982 server/config.go:496  [n?] 1 storage engine initialized
I180819 19:26:03.133822 982 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180819 19:26:03.133880 982 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180819 19:26:03.134108 982 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180819 19:26:03.134680 982 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180819 19:26:03.171322 1936 storage/replica_range_lease.go:554  [replicate,n1,s1,r20/1:/Table/{23-50}] transferring lease to s2
I180819 19:26:03.172605 2016 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).Expiration: true
I180819 19:26:03.172738 2016 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).DeprecatedStartStasis: true
I180819 19:26:03.179166 1742 storage/replica_proposal.go:214  [n2,s2,r20/2:/Table/{23-50}] new range lease repl=(n2,s2):2 seq=3 start=1534706763.171502616,0 epo=1 pro=1534706763.171514358,0 following repl=(n1,s1):1 seq=2 start=1534706759.685018191,0 exp=1534706768.699386229,0 pro=1534706759.699433636,0
I180819 19:26:03.182857 1936 storage/replica_range_lease.go:617  [replicate,n1,s1,r20/1:/Table/{23-50}] done transferring lease to s2: <nil>
I180819 19:26:03.187280 1985 storage/replica_raftstorage.go:529  [replicate,n1,s1,r22/1:/{Table/51-Max}] generated preemptive snapshot 30f758ce at index 14
I180819 19:26:03.194539 1985 storage/store_snapshot.go:655  [replicate,n1,s1,r22/1:/{Table/51-Max}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 4, rate-limit: 2.0 MiB/sec, 7ms
I180819 19:26:03.198250 2050 storage/replica_raftstorage.go:773  [n2,s2,r22/?:{-}] applying preemptive snapshot at index 14 (id=30f758ce, encoded size=604, 1 rocksdb batches, 4 log entries)
I180819 19:26:03.199707 2050 storage/replica_raftstorage.go:779  [n2,s2,r22/?:/{Table/51-Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I180819 19:26:03.204367 1985 storage/replica_command.go:792  [replicate,n1,s1,r22/1:/{Table/51-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, next=2, gen=0]
I180819 19:26:03.233364 1985 storage/replica.go:3749  [n1,s1,r22/1:/{Table/51-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 19:26:03.261252 1953 storage/replica_range_lease.go:554  [replicate,n1,s1,r22/1:/{Table/51-Max}] transferring lease to s2
I180819 19:26:03.268166 2049 storage/replica_command.go:792  [replicate,n2,s2,r20/2:/Table/{23-50}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r20:/Table/{23-50} [(n1,s1):1, (n2,s2):2, next=3, gen=842354677200]
I180819 19:26:03.268519 1953 storage/replica_range_lease.go:617  [replicate,n1,s1,r22/1:/{Table/51-Max}] done transferring lease to s2: <nil>
I180819 19:26:03.270372 1802 storage/replica_proposal.go:214  [n2,s2,r22/2:/{Table/51-Max}] new range lease repl=(n2,s2):2 seq=3 start=1534706763.261370304,0 epo=1 pro=1534706763.261383279,0 following repl=(n1,s1):1 seq=2 start=1534706759.685018191,0 exp=1534706768.699386229,0 pro=1534706759.699433636,0
I180819 19:26:03.279684 2085 storage/replica_raftstorage.go:529  [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot cf47dade at index 19
I180819 19:26:03.287474 2085 storage/store_snapshot.go:655  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 9, rate-limit: 2.0 MiB/sec, 7ms
I180819 19:26:03.291382 2146 storage/replica_raftstorage.go:773  [n2,s2,r11/?:{-}] applying preemptive snapshot at index 19 (id=cf47dade, encoded size=2781, 1 rocksdb batches, 9 log entries)
I180819 19:26:03.299309 2030 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:33541
I180819 19:26:03.300293 2146 storage/replica_raftstorage.go:779  [n2,s2,r11/?:/Table/1{4-5}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=8ms commit=0ms]
I180819 19:26:03.308324 2117 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:43107}
I180819 19:26:03.308556 2085 storage/replica_command.go:792  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2, gen=842366007440]
I180819 19:26:03.330623 982 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180819 19:26:03.330892 982 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "57064f73-a007-4a8c-940e-19e3fd7442da"
I180819 19:26:03.343003 2074 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 19:26:03.354113 2073 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 19:26:03.364761 2085 storage/replica.go:3749  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 19:26:03.368587 982 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 19:26:03.377775 982 server/node.go:428  [n?] new node allocated ID 3
I180819 19:26:03.378295 982 gossip/gossip.go:383  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:43107" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1775-g9146fae" started_at:1534706763377962859 
I180819 19:26:03.379056 982 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180819 19:26:03.379348 982 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180819 19:26:03.388443 2104 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180819 19:26:03.409961 2170 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180819 19:26:03.410315 2152 storage/replica_range_lease.go:554  [replicate,n1,s1,r11/1:/Table/1{4-5}] transferring lease to s2
I180819 19:26:03.432772 1815 storage/replica_proposal.go:214  [n2,s2,r11/2:/Table/1{4-5}] new range lease repl=(n2,s2):2 seq=3 start=1534706763.410447318,0 epo=1 pro=1534706763.410466151,0 following repl=(n1,s1):1 seq=2 start=1534706759.685018191,0 exp=1534706768.699386229,0 pro=1534706759.699433636,0
I180819 19:26:03.439162 2152 storage/replica_range_lease.go:617  [replicate,n1,s1,r11/1:/Table/1{4-5}] done transferring lease to s2: <nil>
I180819 19:26:03.464804 2121 storage/replica_raftstorage.go:529  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot f02f9a9e at index 24
I180819 19:26:03.487730 2049 storage/replica.go:3749  [n2,s2,r20/2:/Table/{23-50}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180819 19:26:03.489237 2121 storage/store_snapshot.go:655  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 14, rate-limit: 2.0 MiB/sec, 10ms
I180819 19:26:03.493408 2157 storage/replica_raftstorage.go:773  [n2,s2,r3/?:{-}] applying preemptive snapshot at index 24 (id=f02f9a9e, encoded size=5717, 1 rocksdb batches, 14 log entries)
I180819 19:26:03.500311 1948 storage/store.go:3691  [n1,s1,r20/1:/Table/{23-50}] added to replica GC queue (peer suggestion)
I180819 19:26:03.505284 2159 storage/store.go:2586  [replicaGC,n1,s1,r20/1:/Table/{23-50}] removing replica
I180819 19:26:03.507232 2159 storage/replica.go:880  [replicaGC,n1,s1,r20/1:/Table/{23-50}] removed 11 (4+7) keys in 1ms [clear=0ms commit=0ms]
I180819 19:26:03.509185 2246 storage/replica_command.go:792  [replicate,n2,s2,r22/2:/{Table/51-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r22:/{Table/51-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I180819 19:26:03.512196 2157 storage/replica_raftstorage.go:779  [n2,s2,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=11ms commit=0ms]
I180819 19:26:03.514787 982 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180819 19:26:03.523104 982 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180819 19:26:03.524515 982 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:26:03.525316 982 server/server.go:1806  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180819 19:26:03.544967 2279 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:26:03.545909 982 server/server.go:1537  [n3] starting https server at 127.0.0.1:36967 (use: 127.0.0.1:36967)
I180819 19:26:03.550059 982 server/server.go:1539  [n3] starting grpc/postgres server at 127.0.0.1:43107
I180819 19:26:03.550288 982 server/server.go:1540  [n3] advertising CockroachDB node at 127.0.0.1:43107
I180819 19:26:03.556745 982 server/server.go:1593  [n3] done ensuring all necessary migrations have run
I180819 19:26:03.556893 982 server/server.go:1596  [n3] serving sql connections
I180819 19:26:03.561935 2121 storage/replica_command.go:792  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=842354249288]
I180819 19:26:03.662013 2121 storage/replica.go:3749  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 19:26:03.700520 2375 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180819 19:26:03.834292 2284 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:43107} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1775-g9146fae StartedAt:1534706763377962859 LocalityAddress:[]} ClusterID:57064f73-a007-4a8c-940e-19e3fd7442da StartedAt:1534706763377962859 LastUp:1534706763377962859}
I180819 19:26:03.929490 2282 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180819 19:26:03.953296 2468 storage/replica_range_lease.go:554  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] transferring lease to s2
I180819 19:26:03.955759 2246 storage/replica.go:3749  [n2,s2,r22/2:/{Table/51-Max}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180819 19:26:04.007714 2301 storage/store.go:2586  [replicaGC,n1,s1,r22/1:/{Table/51-Max}] removing replica
I180819 19:26:04.009134 2301 storage/replica.go:880  [replicaGC,n1,s1,r22/1:/{Table/51-Max}] removed 7 (0+7) keys in 0ms [clear=0ms commit=0ms]
I180819 19:26:04.024023 2358 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180819 19:26:04.033935 2468 storage/replica_range_lease.go:617  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] done transferring lease to s2: <nil>
I180819 19:26:04.035238 2374 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180819 19:26:04.045333 2413 storage/replica_raftstorage.go:529  [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot 405b3213 at index 18
I180819 19:26:04.057714 2327 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180819 19:26:04.058521 2413 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r15/1:/Table/1{8-9}] connection to n3 established
I180819 19:26:04.063380 2413 storage/store_snapshot.go:655  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
I180819 19:26:04.065668 2500 storage/replica_raftstorage.go:773  [n3,s3,r15/?:{-}] applying preemptive snapshot at index 18 (id=405b3213, encoded size=2418, 1 rocksdb batches, 8 log entries)
I180819 19:26:04.068556 2500 storage/replica_raftstorage.go:779  [n3,s3,r15/?:/Table/1{8-9}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I180819 19:26:04.071763 2413 storage/replica_command.go:792  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=842365987552]
I180819 19:26:04.104001 2408 storage/replica_command.go:792  [replicate,n2,s2,r11/2:/Table/1{4-5}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, (n2,s2):2, next=3, gen=842353032752]
I180819 19:26:04.105898 2437 sql/event_log.go:126  [n1,client=127.0.0.1:36838,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180819 19:26:04.105941 2413 storage/replica.go:3749  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180819 19:26:04.193308 2401 storage/replica_command.go:282  [split,n2,s2,r22/2:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r31]
I180819 19:26:04.228700 2463 storage/replica_range_lease.go:554  [replicate,n1,s1,r15/1:/Table/1{8-9}] transferring lease to s3
I180819 19:26:04.259717 2463 storage/replica_range_lease.go:617  [replicate,n1,s1,r15/1:/Table/1{8-9}] done transferring lease to s3: <nil>
I180819 19:26:04.260689 2437 sql/event_log.go:126  [n1,client=127.0.0.1:36838,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180819 19:26:04.262686 2183 storage/replica_proposal.go:214  [n3,s3,r15/2:/Table/1{8-9}] new range lease repl=(n3,s3):2 seq=3 start=1534706764.228860974,0 epo=1 pro=1534706764.228878350,0 following repl=(n1,s1):1 seq=2 start=1534706759.685018191,0 exp=1534706768.699386229,0 pro=1534706759.699433636,0
E180819 19:26:04.271743 2566 storage/queue.go:788  [replicate,n3,s3,r15/2:/Table/1{8-9}] no removable replicas from range that needs a removal: [raft progress unknown]
I180819 19:26:04.276851 2580 storage/replica_raftstorage.go:529  [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 4300f1d5 at index 83
I180819 19:26:04.307149 2580 storage/store_snapshot.go:655  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n3,s3):?: kv pairs: 61, log entries: 73, rate-limit: 2.0 MiB/sec, 12ms
I180819 19:26:04.310221 2569 storage/replica_raftstorage.go:773  [n3,s3,r1/?:{-}] applying preemptive snapshot at index 83 (id=4300f1d5, encoded size=19451, 1 rocksdb batches, 73 log entries)
I180819 19:26:04.332247 2569 storage/replica_raftstorage.go:779  [n3,s3,r1/?:/{Min-System/}] applied preemptive snapshot in 22ms [clear=0ms batch=0ms entries=20ms commit=1ms]
I180819 19:26:04.335639 2580 storage/replica_command.go:792  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2, gen=842353391232]
I180819 19:26:04.373385 2408 storage/replica.go:3749  [n2,s2,r11/2:/Table/1{4-5}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180819 19:26:04.382655 2627 storage/replica_command.go:792  [replicate,n3,s3,r15/2:/Table/1{8-9}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, (n3,s3):2, next=3, gen=842396675024]
I180819 19:26:04.393083 2437 sql/event_log.go:126  [n1,client=127.0.0.1:36838,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180819 19:26:04.415024 2580 storage/replica.go:3749  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180819 19:26:04.437094 1794 storage/replica_proposal.go:214  [n2,s2,r22/2:/{Table/51-Max}] new range lease repl=(n2,s2):1 seq=3 start=1534706763.261370304,0 epo=1 pro=1534706763.261383279,0 following repl=(n2,s2):1 seq=3 start=1534706763.261370304,0 epo=1 pro=1534706763.261383279,0
I180819 19:26:04.439146 1948 storage/store.go:3691  [n1,s1,r11/1:/Table/1{4-5}] added to replica GC queue (peer suggestion)
--- FAIL: testrace/TestInitialPartitioning: TestInitialPartitioning/TIMESTAMPTZ (46.040s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n2: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE)
    [n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
    full trace:
       === SPAN START: session recording ===
      [n1,client=127.0.0.1:36838,user=root] [NoTxn pos:5750] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE,)
      [n1,client=127.0.0.1:36838,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE,) in state: NoTxn
       === SPAN START: sql txn ===
      [n1,client=127.0.0.1:36838,user=root] [Open pos:5750] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE,)
      [n1,client=127.0.0.1:36838,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE,) in state: Open
      [n1,client=127.0.0.1:36838,user=root] planning starts: SELECT
      [n1,client=127.0.0.1:36838,user=root] generating optimizer plan
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:36838,user=root,txn=c98f4936,n1] querying next range at /Table/2/1/52/"TIMESTAMPTZ"/3/1
      [client=127.0.0.1:36838,user=root,txn=c98f4936,n1] r7: sending batch 1 Get to (n1,s1):1
      [client=127.0.0.1:36838,user=root,txn=c98f4936,n1] sending request to local server
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n1] 1 Get
      [n1,s1] executing 1 requests
      [n1,s1,r7/1:/Table/{SystemCon…-11}] read-only path
      [n1,s1,r7/1:/Table/{SystemCon…-11}] command queue
      [n1,s1,r7/1:/Table/{SystemCon…-11}] waiting for read lock
      [n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:36838,user=root] client.Txn did AutoCommit. err: <nil>

      [n1,client=127.0.0.1:36838,user=root] added table 'data.public."TIMESTAMPTZ"' to table collection
      [n1,client=127.0.0.1:36838,user=root] optimizer plan succeeded
      [n1,client=127.0.0.1:36838,user=root] planning ends
      [n1,client=127.0.0.1:36838,user=root] checking distributability
      [n1,client=127.0.0.1:36838,user=root] distributable plan: true
      [n1,client=127.0.0.1:36838,user=root] execution starts: distributed
       === SPAN START: consuming rows ===
      [n1,client=127.0.0.1:36838,user=root] creating DistSQL plan with distributedMode=true
      [n1,client=127.0.0.1:36838,user=root] querying next range at /Table/92/1/1970-01-05T07:59:40.000727Z
      [n1,client=127.0.0.1:36838,user=root] running DistSQL plan
       === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
       === SPAN START: outbox ===
    cockroach.stat.outbox.bytes_sent: ࡂ
    cockroach.streamid: 1
       === SPAN START: [async] drain ===
       === SPAN START: table reader ===
    cockroach.stat.tablereader.input.rows: 0
    cockroach.stat.tablereader.stalltime: 2µs
    cockroach.processorid: 0
      [n2] Scan /Table/92/1/1970-01-05T07:59:40.000727Z{-/#}
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [txn=19ac0eae,n2] querying next range at /Table/92/1/1970-01-05T07:59:40.000727Z
      [txn=19ac0eae,n2] r281: sending batch 1 Scan to (n2,s2):2
      [txn=19ac0eae,n2] sending request to local server
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n2] 1 Scan
      [n2,s2] executing 1 requests
      [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] read-only path
      [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] command queue
      [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] waiting for read lock
      [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] read completed
       === SPAN START: flow ===
      [n1,client=127.0.0.1:36838,user=root] starting (1 processors, 0 startables)
       === SPAN START: noop ===
    cockroach.processorid: 2
      [n2] Consumer sent handshake. Consuming flow scheduled: false
       === SPAN START: ordered aggregator ===
    cockroach.stat.aggregator.stalltime: 393µs
    cockroach.processorid: 1
    cockroach.stat.aggregator.input.rows: 0
    cockroach.stat.aggregator.mem.max: 10 KiB
      [n2] Consumer sent handshake. Consuming flow scheduled: true
      [n2] accumulation complete
      [n1,client=127.0.0.1:36838,user=root] execution ends
      [n1,client=127.0.0.1:36838,user=root] rows affected: 1
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:36838,user=root] AutoCommit. err: <nil>
      [n1,client=127.0.0.1:36838,user=root] releasing 1 tables
      [n1,client=127.0.0.1:36838,user=root] [NoTxn pos:5751] executing ExecStmt: SET TRACING = off
      [n1,client=127.0.0.1:36838,user=root] executing: SET TRACING = off in state: NoTxn
    goroutine 34175 [running]:
    runtime/debug.Stack(0xa7a358200, 0xc421ff5260, 0x3da6020)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xb5
    github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e07440, 0xc4232701e0, 0xc421ff5230)
        /go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
    github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc4232701e0)
        /go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1156 +0x258
    testing.tRunner(0xc4232701e0, 0xc42283f7a0)
        /usr/local/go/src/testing/testing.go:777 +0x16e
    created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180819 19:28:27.972089 34143 storage/replica_command.go:282  [split,n3,s3,r278/1:/{Table/91/2-Max}] initiating a split of this range at key /Table/92 [r279]
I180819 19:28:28.000263 2437 sql/event_log.go:126  [n1,client=127.0.0.1:36838,user=root] Event: "create_table", target: 92, info: {TableName:data.public."TIMESTAMPTZ" Statement:CREATE TABLE "TIMESTAMPTZ" (a TIMESTAMP WITH TIME ZONE PRIMARY KEY) PARTITION BY LIST (a) (PARTITION p VALUES IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE)) User:root}
I180819 19:28:28.227505 2184 storage/replica_proposal.go:214  [n3,s3,r278/1:/{Table/91/2-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:28.262532 2437 sql/event_log.go:126  [n1,client=127.0.0.1:36838,user=root] Event: "set_zone_config", target: 92, info: {Target:data."TIMESTAMPTZ"@primary Config:constraints: [+n1] User:root}
I180819 19:28:28.346983 34222 storage/replica_command.go:282  [split,n3,s3,r279/1:/{Table/92-Max}] initiating a split of this range at key /Table/92/1 [r280]
I180819 19:28:28.422519 2437 sql/event_log.go:126  [n1,client=127.0.0.1:36838,user=root] Event: "set_zone_config", target: 92, info: {Target:data."TIMESTAMPTZ".p Config:constraints: [+n2] User:root}
I180819 19:28:28.423583 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:28.533032 2258 storage/replica_proposal.go:214  [n3,s3,r279/1:/{Table/92-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:28.552608 34226 storage/replica_command.go:282  [split,n3,s3,r280/1:/{Table/92/1-Max}] initiating a split of this range at key /Table/92/1/1970-01-05T07:59:40.000727Z [r281]
I180819 19:28:28.761751 2218 storage/replica_proposal.go:214  [n3,s3,r280/1:/{Table/92/1-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:28.768866 34327 storage/replica_raftstorage.go:529  [replicate,n3,s3,r280/1:/Table/92/1{-/1970-0…}] generated preemptive snapshot 2b966ef2 at index 16
I180819 19:28:28.769023 34371 storage/replica_command.go:282  [split,n3,s3,r281/1:/{Table/92/1/1…-Max}] initiating a split of this range at key /Table/92/1/1970-01-05T07:59:40.000727001Z [r282]
I180819 19:28:28.812244 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:28.821633 34327 storage/store_snapshot.go:655  [replicate,n3,s3,r280/1:/Table/92/1{-/1970-0…}] streamed snapshot to (n1,s1):?: kv pairs: 8, log entries: 6, rate-limit: 2.0 MiB/sec, 52ms
I180819 19:28:28.836282 34375 storage/replica_raftstorage.go:773  [n1,s1,r280/?:{-}] applying preemptive snapshot at index 16 (id=2b966ef2, encoded size=2698, 1 rocksdb batches, 6 log entries)
I180819 19:28:28.840218 34375 storage/replica_raftstorage.go:779  [n1,s1,r280/?:/Table/92/1{-/1970-0…}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I180819 19:28:28.906511 34327 storage/replica_command.go:792  [replicate,n3,s3,r280/1:/Table/92/1{-/1970-0…}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r280:/Table/92/1{-/1970-01-05T07:59:40.000727Z} [(n3,s3):1, next=2, gen=842409027200]
I180819 19:28:28.980585 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a > '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:29.045805 34395 storage/replica_command.go:792  [replicate,n2,s2,r276/2:/Table/91/1/1970-01-06T00:42:…] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r276:/Table/91/1/1970-01-06T00:42:01.00044{Z-0001Z} [(n3,s3):1, (n2,s2):2, next=3, gen=842409583480]
I180819 19:28:29.108620 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:29.144459 34327 storage/replica.go:3749  [n3,s3,r280/1:/Table/92/1{-/1970-0…}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180819 19:28:29.152196 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:29.173759 2241 storage/replica_proposal.go:214  [n3,s3,r281/1:/{Table/92/1/1…-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:29.181207 34443 storage/replica_command.go:282  [split,n3,s3,r282/1:/{Table/92/1/1…-Max}] initiating a split of this range at key /Table/92/2 [r283]
I180819 19:28:29.218347 34470 storage/replica_range_lease.go:554  [replicate,n3,s3,r280/1:/Table/92/1{-/1970-0…}] transferring lease to s1
I180819 19:28:29.233159 1167 storage/replica_proposal.go:214  [n1,s1,r280/2:/Table/92/1{-/1970-0…}] new range lease repl=(n1,s1):2 seq=7 start=1534706909.218545483,0 epo=1 pro=1534706909.218571954,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:29.246994 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a > '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:29.255892 34395 storage/replica.go:3749  [n2,s2,r276/2:/Table/91/1/1970-01-06T00:42:…] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n2,s2):2] next=3
I180819 19:28:29.260274 34470 storage/replica_range_lease.go:617  [replicate,n3,s3,r280/1:/Table/92/1{-/1970-0…}] done transferring lease to s1: <nil>
I180819 19:28:29.264440 34412 storage/replica_raftstorage.go:529  [replicate,n3,s3,r281/1:/Table/92/1/1970-01-05T07:59:…] generated preemptive snapshot 49f2bb7b at index 16
I180819 19:28:29.273071 3403 storage/store.go:3691  [n3,s3,r276/1:/Table/91/1/1970-01-06T00:42:…] added to replica GC queue (peer suggestion)
I180819 19:28:29.287861 34515 storage/store.go:2586  [replicaGC,n3,s3,r276/1:/Table/91/1/1970-01-06T00:42:…] removing replica
I180819 19:28:29.326266 34412 storage/store_snapshot.go:655  [replicate,n3,s3,r281/1:/Table/92/1/1970-01-05T07:59:…] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 6, rate-limit: 2.0 MiB/sec, 59ms
I180819 19:28:29.329889 34519 storage/replica_raftstorage.go:773  [n2,s2,r281/?:{-}] applying preemptive snapshot at index 16 (id=49f2bb7b, encoded size=2189, 1 rocksdb batches, 6 log entries)
I180819 19:28:29.332588 34515 storage/replica.go:880  [replicaGC,n3,s3,r276/1:/Table/91/1/1970-01-06T00:42:…] removed 6 (0+6) keys in 1ms [clear=0ms commit=0ms]
I180819 19:28:29.338384 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:29.339222 34519 storage/replica_raftstorage.go:779  [n2,s2,r281/?:/Table/92/1/1970-01-05T07:59:…] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=1ms commit=7ms]
I180819 19:28:29.346024 34412 storage/replica_command.go:792  [replicate,n3,s3,r281/1:/Table/92/1/1970-01-05T07:59:…] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r281:/Table/92/1/1970-01-05T07:59:40.000727{Z-001Z} [(n3,s3):1, next=2, gen=842365988744]
I180819 19:28:29.444716 2221 storage/replica_proposal.go:214  [n3,s3,r282/1:/{Table/92/1/1…-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:29.584506 34412 storage/replica.go:3749  [n3,s3,r281/1:/Table/92/1/1970-01-05T07:59:…] proposing ADD_REPLICA((n2,s2):2): updated=[(n3,s3):1 (n2,s2):2] next=3
I180819 19:28:29.658853 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a > '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:29.677350 34566 storage/replica_range_lease.go:554  [replicate,n3,s3,r281/1:/Table/92/1/1970-01-05T07:59:…] transferring lease to s2
I180819 19:28:29.745092 1234 server/status/runtime.go:433  [n1] runtime stats: 1.0 GiB RSS, 652 goroutines, 40 MiB/25 MiB/89 MiB GO alloc/idle/total, 103 MiB/143 MiB CGO alloc/total, 3222.23cgo/sec, 1.73/0.09 %(u/s)time, 0.01 %gc (12x)
I180819 19:28:29.750463 34566 storage/replica_range_lease.go:617  [replicate,n3,s3,r281/1:/Table/92/1/1970-01-05T07:59:…] done transferring lease to s2: <nil>
I180819 19:28:29.763632 1823 storage/replica_proposal.go:214  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] new range lease repl=(n2,s2):2 seq=7 start=1534706909.677544791,0 epo=1 pro=1534706909.677571786,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:29.765202 34538 storage/replica_raftstorage.go:529  [replicate,n3,s3,r282/1:/Table/92/{1/1970-…-2}] generated preemptive snapshot de3b6ce3 at index 16
I180819 19:28:29.785354 34538 storage/store_snapshot.go:655  [replicate,n3,s3,r282/1:/Table/92/{1/1970-…-2}] streamed snapshot to (n1,s1):?: kv pairs: 7, log entries: 6, rate-limit: 2.0 MiB/sec, 15ms
I180819 19:28:29.801137 34541 storage/replica_raftstorage.go:773  [n1,s1,r282/?:{-}] applying preemptive snapshot at index 16 (id=de3b6ce3, encoded size=2108, 1 rocksdb batches, 6 log entries)
I180819 19:28:29.803471 34541 storage/replica_raftstorage.go:779  [n1,s1,r282/?:/Table/92/{1/1970-…-2}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180819 19:28:29.833790 34538 storage/replica_command.go:792  [replicate,n3,s3,r282/1:/Table/92/{1/1970-…-2}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r282:/Table/92/{1/1970-01-05T07:59:40.000727001Z-2} [(n3,s3):1, next=2, gen=842401119232]
I180819 19:28:29.932807 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE)
I180819 19:28:29.978478 34652 storage/replica_command.go:792  [replicate,n1,s1,r277/2:/Table/91/{1/1970-…-2}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r277:/Table/91/{1/1970-01-06T00:42:01.000440001Z-2} [(n3,s3):1, (n1,s1):2, next=3, gen=842360848168]
I180819 19:28:30.178823 1236 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 19:28:30.517364 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:30.520756 34652 storage/replica.go:3749  [n1,s1,r277/2:/Table/91/{1/1970-…-2}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180819 19:28:30.554305 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:30.611948 2562 storage/store.go:3691  [n3,s3,r277/1:/Table/91/{1/1970-…-2}] added to replica GC queue (peer suggestion)
I180819 19:28:30.685493 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a < '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:30.689760 34823 storage/store.go:2586  [replicaGC,n3,s3,r277/1:/Table/91/{1/1970-…-2}] removing replica
I180819 19:28:30.692953 34823 storage/replica.go:880  [replicaGC,n3,s3,r277/1:/Table/91/{1/1970-…-2}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180819 19:28:30.707268 34538 storage/replica.go:3749  [n3,s3,r282/1:/Table/92/{1/1970-…-2}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180819 19:28:30.818399 34802 storage/replica_range_lease.go:554  [replicate,n3,s3,r282/1:/Table/92/{1/1970-…-2}] transferring lease to s1
I180819 19:28:30.854835 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:30.887921 1139 storage/replica_proposal.go:214  [n1,s1,r282/2:/Table/92/{1/1970-…-2}] new range lease repl=(n1,s1):2 seq=7 start=1534706910.818581553,0 epo=1 pro=1534706910.818608524,0 following repl=(n3,s3):1 seq=6 start=1534706845.619495547,0 epo=1 pro=1534706845.619524720,0
I180819 19:28:30.900590 34802 storage/replica_range_lease.go:617  [replicate,n3,s3,r282/1:/Table/92/{1/1970-…-2}] done transferring lease to s1: <nil>
I180819 19:28:31.011604 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a IN ('1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE)
I180819 19:28:31.281457 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:31.348018 34980 storage/replica_command.go:792  [replicate,n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r281:/Table/92/1/1970-01-05T07:59:40.000727{Z-001Z} [(n3,s3):1, (n2,s2):2, next=3, gen=842366502152]
I180819 19:28:31.464573 34175 ccl/partitionccl/partition_test.go:1156  SucceedsSoon: expected to scan on n2: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
[n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
full trace:
   === SPAN START: session recording ===
  [n1,client=127.0.0.1:36838,user=root] [NoTxn pos:5210] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
  [n1,client=127.0.0.1:36838,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE in state: NoTxn
   === SPAN START: sql txn ===
  [n1,client=127.0.0.1:36838,user=root] [Open pos:5210] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
  [n1,client=127.0.0.1:36838,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE in state: Open
  [n1,client=127.0.0.1:36838,user=root] planning starts: SELECT
  [n1,client=127.0.0.1:36838,user=root] generating optimizer plan
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [client=127.0.0.1:36838,user=root,txn=429e7d78,n1] querying next range at /Table/2/1/52/"TIMESTAMPTZ"/3/1
  [client=127.0.0.1:36838,user=root,txn=429e7d78,n1] r7: sending batch 1 Get to (n1,s1):1
  [client=127.0.0.1:36838,user=root,txn=429e7d78,n1] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n1] 1 Get
  [n1,s1] executing 1 requests
  [n1,s1,r7/1:/Table/{SystemCon…-11}] read-only path
  [n1,s1,r7/1:/Table/{SystemCon…-11}] command queue
  [n1,s1,r7/1:/Table/{SystemCon…-11}] waiting for read lock
  [n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:36838,user=root] client.Txn did AutoCommit. err: <nil>

  [n1,client=127.0.0.1:36838,user=root] added table 'data.public."TIMESTAMPTZ"' to table collection
  [n1,client=127.0.0.1:36838,user=root] optimizer plan succeeded
  [n1,client=127.0.0.1:36838,user=root] planning ends
  [n1,client=127.0.0.1:36838,user=root] checking distributability
  [n1,client=127.0.0.1:36838,user=root] distributable plan: true
  [n1,client=127.0.0.1:36838,user=root] execution starts: distributed
   === SPAN START: consuming rows ===
  [n1,client=127.0.0.1:36838,user=root] creating DistSQL plan with distributedMode=true
  [n1,client=127.0.0.1:36838,user=root] querying next range at /Table/92/1/1970-01-05T07:59:40.000727Z
  [n1,client=127.0.0.1:36838,user=root] running DistSQL plan
   === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
   === SPAN START: outbox ===
cockroach.stat.outbox.bytes_sent: ࠿
cockroach.streamid: 1
   === SPAN START: [async] drain ===
   === SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.input.rows: 0
cockroach.stat.tablereader.stalltime: 1µs
  [n2] Scan /Table/92/1/1970-01-05T07:59:40.000727Z{-/#}
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [txn=146464fe,n2] querying next range at /Table/92/1/1970-01-05T07:59:40.000727Z
  [txn=146464fe,n2] r281: sending batch 1 Scan to (n2,s2):2
  [txn=146464fe,n2] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n2] 1 Scan
  [n2,s2] executing 1 requests
  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] read-only path
  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] command queue
  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] waiting for read lock
  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] read completed
   === SPAN START: ordered aggregator ===
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.mem.max: 10 KiB
cockroach.stat.aggregator.stalltime: 251µs
  [n2] accumulation complete
   === SPAN START: flow ===
  [n1,client=127.0.0.1:36838,user=root] starting (1 processors, 0 startables)
   === SPAN START: noop ===
cockroach.processorid: 2
  [n1,client=127.0.0.1:36838,user=root] execution ends
  [n1,client=127.0.0.1:36838,user=root] rows affected: 1
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:36838,user=root] AutoCommit. err: <nil>
  [n1,client=127.0.0.1:36838,user=root] releasing 1 tables
  [n1,client=127.0.0.1:36838,user=root] [NoTxn pos:5211] executing ExecStmt: SET TRACING = off
  [n1,client=127.0.0.1:36838,user=root] executing: SET TRACING = off in state: NoTxn
I180819 19:28:31.469247 34175 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
I180819 19:28:31.600914 34175 ccl/partitionccl/partition_test.go:1156  SucceedsSoon: expected to scan on n2: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
[n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
full trace:
   === SPAN START: session recording ===
  [n1,client=127.0.0.1:36838,user=root] [NoTxn pos:5216] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
  [n1,client=127.0.0.1:36838,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE in state: NoTxn
   === SPAN START: sql txn ===
  [n1,client=127.0.0.1:36838,user=root] [Open pos:5216] executing ExecStmt: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE
  [n1,client=127.0.0.1:36838,user=root] executing: SELECT count(*) FROM "TIMESTAMPTZ" WHERE a = '1970-01-05 07:59:40.000727+00:00':::TIMESTAMP WITH TIME ZONE in state: Open
  [n1,client=127.0.0.1:36838,user=root] planning starts: SELECT
  [n1,client=127.0.0.1:36838,user=root] generating optimizer plan
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [client=127.0.0.1:36838,user=root,txn=8543aef0,n1] querying next range at /Table/2/1/52/"TIMESTAMPTZ"/3/1
  [client=127.0.0.1:36838,user=root,txn=8543aef0,n1] r7: sending batch 1 Get to (n1,s1):1
  [client=127.0.0.1:36838,user=root,txn=8543aef0,n1] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n1] 1 Get
  [n1,s1] executing 1 requests
  [n1,s1,r7/1:/Table/{SystemCon…-11}] read-only path
  [n1,s1,r7/1:/Table/{SystemCon…-11}] command queue
  [n1,s1,r7/1:/Table/{SystemCon…-11}] waiting for read lock
  [n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:36838,user=root] client.Txn did AutoCommit. err: <nil>

  [n1,client=127.0.0.1:36838,user=root] added table 'data.public."TIMESTAMPTZ"' to table collection
  [n1,client=127.0.0.1:36838,user=root] optimizer plan succeeded
  [n1,client=127.0.0.1:36838,user=root] planning ends
  [n1,client=127.0.0.1:36838,user=root] checking distributability
  [n1,client=127.0.0.1:36838,user=root] distributable plan: true
  [n1,client=127.0.0.1:36838,user=root] execution starts: distributed
   === SPAN START: consuming rows ===
  [n1,client=127.0.0.1:36838,user=root] creating DistSQL plan with distributedMode=true
  [n1,client=127.0.0.1:36838,user=root] querying next range at /Table/92/1/1970-01-05T07:59:40.000727Z
  [n1,client=127.0.0.1:36838,user=root] running DistSQL plan
   === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
   === SPAN START: outbox ===
cockroach.streamid: 1
cockroach.stat.outbox.bytes_sent: ࡂ
   === SPAN START: [async] drain ===
   === SPAN START: table reader ===
cockroach.stat.tablereader.stalltime: 1µs
cockroach.processorid: 0
cockroach.stat.tablereader.input.rows: 0
  [n2] Scan /Table/92/1/1970-01-05T07:59:40.000727Z{-/#}
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [txn=0b3319ad,n2] querying next range at /Table/92/1/1970-01-05T07:59:40.000727Z
   === SPAN START: flow ===
  [n1,client=127.0.0.1:36838,user=root] starting (1 processors, 0 startables)
   === SPAN START: noop ===
cockroach.processorid: 2
  [txn=0b3319ad,n2] r281: sending batch 1 Scan to (n2,s2):2
  [txn=0b3319ad,n2] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n2] 1 Scan
  [n2,s2] executing 1 requests
  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] read-only path
  [n2,s2,r281/2:/Table/92/1/1970-01-05T07:59:…] comman

Please assign, take a look and update the issue accordingly.

a-robinson commented 6 years ago

Dupe of https://github.com/cockroachdb/cockroach/issues/28786