cockroachdb / cockroach

teamcity: failed tests on master: test/TestInitialPartitioning, testrace/TestInitialPartitioning #28804

Closed: cockroach-teamcity closed this issue 6 years ago

cockroach-teamcity commented 6 years ago

The following tests appear to have failed:

#846918:

--- FAIL: testrace/TestInitialPartitioning (239.020s)

------- Stdout: -------
W180819 01:23:34.265322 879 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180819 01:23:34.342100 879 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180819 01:23:34.342686 879 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 01:23:34.342802 879 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 01:23:34.371030 879 server/config.go:496  [n?] 1 storage engine initialized
I180819 01:23:34.371181 879 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180819 01:23:34.371267 879 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180819 01:23:34.454327 879 server/node.go:373  [n?] **** cluster 45667a79-ab72-4df5-9c00-182ba82cedf6 has been created
I180819 01:23:34.454510 879 server/server.go:1400  [n?] **** add additional nodes by specifying --join=127.0.0.1:36843
I180819 01:23:34.455448 879 gossip/gossip.go:383  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36843" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1767-gdabd49b" started_at:1534641814455141725 
I180819 01:23:34.495013 879 storage/store.go:1509  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180819 01:23:34.495833 879 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180819 01:23:34.496160 879 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180819 01:23:34.496531 879 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180819 01:23:34.496660 879 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "45667a79-ab72-4df5-9c00-182ba82cedf6"
I180819 01:23:34.497062 879 server/node.go:546  [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180819 01:23:34.498148 879 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:23:34.498216 879 server/server.go:1806  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180819 01:23:34.527442 879 server/server.go:1537  [n1] starting https server at 127.0.0.1:45655 (use: 127.0.0.1:45655)
I180819 01:23:34.527560 879 server/server.go:1539  [n1] starting grpc/postgres server at 127.0.0.1:36843
I180819 01:23:34.527620 879 server/server.go:1540  [n1] advertising CockroachDB node at 127.0.0.1:36843
W180819 01:23:34.527838 879 jobs/registry.go:308  [n1] unable to get node liveness: node not in the liveness table
I180819 01:23:34.535276 1237 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:23:34.575713 1270 storage/replica_command.go:282  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180819 01:23:34.646209 1055 storage/replica_command.go:282  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180819 01:23:34.837231 987 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180819 01:23:34.892951 1052 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180819 01:23:35.043550 1284 storage/replica_command.go:282  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180819 01:23:35.154764 1022 storage/replica_command.go:282  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180819 01:23:35.237091 1308 storage/replica_command.go:282  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
W180819 01:23:35.245566 1339 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=db039a77 key=/Table/SystemConfigSpan/Start rw=true pri=0.00063768 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534641815.112897862,0 orig=1534641815.112897862,0 max=1534641815.112897862,0 wto=false rop=false seq=12
I180819 01:23:35.324046 1332 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180819 01:23:35.405629 1363 storage/replica_command.go:282  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180819 01:23:35.508310 1313 storage/replica_command.go:282  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180819 01:23:35.529330 1274 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180819 01:23:35.603115 1410 storage/replica_command.go:282  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180819 01:23:35.651267 1431 storage/replica_command.go:282  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180819 01:23:35.691189 1388 storage/replica_command.go:282  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180819 01:23:35.753128 1434 storage/replica_command.go:282  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180819 01:23:35.836269 1323 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180819 01:23:35.839293 1442 storage/replica_command.go:282  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180819 01:23:35.904610 1465 storage/replica_command.go:282  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180819 01:23:35.955064 1445 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180819 01:23:36.001334 1448 storage/replica_command.go:282  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180819 01:23:36.024554 1469 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180819 01:23:36.078168 1525 storage/replica_command.go:282  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180819 01:23:36.079761 879 server/server.go:1593  [n1] done ensuring all necessary migrations have run
I180819 01:23:36.079870 879 server/server.go:1596  [n1] serving sql connections
I180819 01:23:36.165197 1555 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:36843} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1767-gdabd49b StartedAt:1534641814455141725 LocalityAddress:[]} ClusterID:45667a79-ab72-4df5-9c00-182ba82cedf6 StartedAt:1534641814455141725 LastUp:1534641814455141725}
I180819 01:23:36.179332 1441 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180819 01:23:36.189162 1588 storage/replica_command.go:282  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180819 01:23:36.234497 1593 storage/replica_command.go:282  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180819 01:23:36.280892 1618 storage/replica_command.go:282  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180819 01:23:36.315561 1545 storage/replica_command.go:282  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180819 01:23:36.369749 1651 storage/replica_command.go:282  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180819 01:23:36.414759 1573 storage/replica_command.go:282  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180819 01:23:36.703803 879 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180819 01:23:36.758457 879 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180819 01:23:36.759051 879 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 01:23:36.759192 879 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 01:23:36.776845 879 server/config.go:496  [n?] 1 storage engine initialized
I180819 01:23:36.776987 879 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180819 01:23:36.777066 879 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180819 01:23:36.777391 879 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180819 01:23:36.810846 879 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180819 01:23:36.886079 1665 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:36843
I180819 01:23:36.889867 1615 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:43715}
I180819 01:23:36.926542 879 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180819 01:23:36.926770 879 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "45667a79-ab72-4df5-9c00-182ba82cedf6"
I180819 01:23:36.930975 1781 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 01:23:36.942244 1780 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 01:23:36.948064 879 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 01:23:36.955375 879 server/node.go:428  [n?] new node allocated ID 2
I180819 01:23:36.955850 879 gossip/gossip.go:383  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43715" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1767-gdabd49b" started_at:1534641816955514150 
I180819 01:23:36.957886 879 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180819 01:23:36.958276 879 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180819 01:23:36.959491 1675 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180819 01:23:36.987834 879 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180819 01:23:36.994319 879 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180819 01:23:37.037340 879 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:23:37.037493 879 server/server.go:1806  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180819 01:23:37.055704 1871 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:23:37.065129 879 server/server.go:1537  [n2] starting https server at 127.0.0.1:37845 (use: 127.0.0.1:37845)
I180819 01:23:37.065256 879 server/server.go:1539  [n2] starting grpc/postgres server at 127.0.0.1:43715
I180819 01:23:37.065308 879 server/server.go:1540  [n2] advertising CockroachDB node at 127.0.0.1:43715
I180819 01:23:37.088526 1616 storage/replica_raftstorage.go:529  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot f72498b2 at index 26
I180819 01:23:37.130037 879 server/server.go:1593  [n2] done ensuring all necessary migrations have run
I180819 01:23:37.130185 879 server/server.go:1596  [n2] serving sql connections
I180819 01:23:37.184800 1902 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180819 01:23:37.355191 1697 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:43715} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1767-gdabd49b StartedAt:1534641816955514150 LocalityAddress:[]} ClusterID:45667a79-ab72-4df5-9c00-182ba82cedf6 StartedAt:1534641816955514150 LastUp:1534641816955514150}
I180819 01:23:37.392822 1616 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] connection to n2 established
I180819 01:23:37.393747 1908 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180819 01:23:37.394721 1695 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
W180819 01:23:37.406580 879 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180819 01:23:37.409591 1616 storage/store_snapshot.go:655  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 7, rate-limit: 2.0 MiB/sec, 16ms
I180819 01:23:37.417736 1986 storage/replica_raftstorage.go:773  [n2,s2,r2/?:{-}] applying preemptive snapshot at index 26 (id=f72498b2, encoded size=57043, 1 rocksdb batches, 7 log entries)
I180819 01:23:37.443633 1986 storage/replica_raftstorage.go:779  [n2,s2,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 26ms [clear=0ms batch=0ms entries=24ms commit=1ms]
I180819 01:23:37.451060 1616 storage/replica_command.go:792  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2, gen=842355010600]
I180819 01:23:37.483271 1616 storage/replica.go:3749  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 01:23:37.553892 879 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180819 01:23:37.557687 879 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 01:23:37.557857 879 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180819 01:23:37.580365 879 server/config.go:496  [n?] 1 storage engine initialized
I180819 01:23:37.580523 879 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180819 01:23:37.580591 879 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180819 01:23:37.580841 879 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180819 01:23:37.607182 879 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180819 01:23:37.607799 2013 storage/replica_range_lease.go:554  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] transferring lease to s2
I180819 01:23:37.609612 2014 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).DeprecatedStartStasis: false
I180819 01:23:37.618847 2013 storage/replica_range_lease.go:617  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] done transferring lease to s2: <nil>
I180819 01:23:37.631886 1985 storage/replica_raftstorage.go:529  [replicate,n1,s1,r19/1:/Table/2{2-3}] generated preemptive snapshot e201281c at index 18
I180819 01:23:37.639673 1985 storage/store_snapshot.go:655  [replicate,n1,s1,r19/1:/Table/2{2-3}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
I180819 01:23:37.641527 1934 storage/replica_raftstorage.go:773  [n2,s2,r19/?:{-}] applying preemptive snapshot at index 18 (id=e201281c, encoded size=2414, 1 rocksdb batches, 8 log entries)
I180819 01:23:37.643670 1934 storage/replica_raftstorage.go:779  [n2,s2,r19/?:/Table/2{2-3}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180819 01:23:37.648288 1985 storage/replica_command.go:792  [replicate,n1,s1,r19/1:/Table/2{2-3}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, next=2, gen=842370167088]
I180819 01:23:37.698675 2027 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:36843
I180819 01:23:37.700586 2037 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33283}
I180819 01:23:37.707785 2082 storage/replica_command.go:792  [replicate,n2,s2,r2/2:/System/{-NodeLive…}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, (n2,s2):2, next=3, gen=842354509080]
I180819 01:23:37.724825 879 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180819 01:23:37.725470 879 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "45667a79-ab72-4df5-9c00-182ba82cedf6"
I180819 01:23:37.741117 2103 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 01:23:37.743165 1985 storage/replica.go:3749  [n1,s1,r19/1:/Table/2{2-3}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 01:23:37.751850 2038 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 01:23:37.756908 879 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180819 01:23:37.763085 879 server/node.go:428  [n?] new node allocated ID 3
I180819 01:23:37.763562 879 gossip/gossip.go:383  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33283" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1767-gdabd49b" started_at:1534641817763226700 
I180819 01:23:37.764317 879 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180819 01:23:37.764628 879 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180819 01:23:37.765626 1888 storage/replica_range_lease.go:554  [replicate,n1,s1,r19/1:/Table/2{2-3}] transferring lease to s2
I180819 01:23:37.766659 1889 util/protoutil/randnullability.go:94  inserting null for (roachpb.Lease).Expiration: false
I180819 01:23:37.766963 2085 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180819 01:23:37.775899 2151 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180819 01:23:37.783834 1888 storage/replica_range_lease.go:617  [replicate,n1,s1,r19/1:/Table/2{2-3}] done transferring lease to s2: <nil>
I180819 01:23:37.785311 1807 storage/replica_proposal.go:214  [n2,s2,r19/2:/Table/2{2-3}] new range lease repl=(n2,s2):2 seq=3 start=1534641817.765833796,0 epo=1 pro=1534641817.765846484,0 following repl=(n1,s1):1 seq=2 start=1534641814.459320444,0 exp=1534641823.489315711,0 pro=1534641814.489348354,0
I180819 01:23:37.786036 2040 storage/replica_raftstorage.go:529  [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot 7a122cac at index 23
I180819 01:23:37.848738 879 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180819 01:23:37.862868 879 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180819 01:23:37.864839 879 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:23:37.864979 879 server/server.go:1806  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180819 01:23:37.868352 2243 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:23:37.889191 2040 storage/store_snapshot.go:655  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 931, log entries: 13, rate-limit: 2.0 MiB/sec, 82ms
I180819 01:23:37.904267 2082 storage/replica.go:3749  [n2,s2,r2/2:/System/{-NodeLive…}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180819 01:23:37.905885 2120 storage/replica_raftstorage.go:773  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 23 (id=7a122cac, encoded size=152793, 1 rocksdb batches, 13 log entries)
I180819 01:23:37.907314 879 server/server.go:1537  [n3] starting https server at 127.0.0.1:37345 (use: 127.0.0.1:37345)
I180819 01:23:37.907439 879 server/server.go:1539  [n3] starting grpc/postgres server at 127.0.0.1:33283
I180819 01:23:37.907483 879 server/server.go:1540  [n3] advertising CockroachDB node at 127.0.0.1:33283
I180819 01:23:37.921436 879 server/server.go:1593  [n3] done ensuring all necessary migrations have run
I180819 01:23:37.921566 879 server/server.go:1596  [n3] serving sql connections
I180819 01:23:37.936832 2021 storage/store.go:3691  [n1,s1,r2/1:/System/{-NodeLive…}] added to replica GC queue (peer suggestion)
I180819 01:23:37.954046 2170 storage/store.go:2586  [replicaGC,n1,s1,r2/1:/System/{-NodeLive…}] removing replica
I180819 01:23:37.955544 2170 storage/replica.go:880  [replicaGC,n1,s1,r2/1:/System/{-NodeLive…}] removed 8 (0+8) keys in 1ms [clear=0ms commit=0ms]
I180819 01:23:37.961092 2245 storage/replica_command.go:792  [replicate,n2,s2,r19/2:/Table/2{2-3}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r19:/Table/2{2-3} [(n1,s1):1, (n2,s2):2, next=3, gen=842369891704]
I180819 01:23:37.990467 2120 storage/replica_raftstorage.go:779  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 84ms [clear=0ms batch=1ms entries=82ms commit=1ms]
I180819 01:23:38.022207 2040 storage/replica_command.go:792  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=842360747224]
I180819 01:23:38.047655 2284 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180819 01:23:38.252324 2040 storage/replica.go:3749  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180819 01:23:38.255163 2173 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:33283} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1767-gdabd49b StartedAt:1534641817763226700 LocalityAddress:[]} ClusterID:45667a79-ab72-4df5-9c00-182ba82cedf6 StartedAt:1534641817763226700 LastUp:1534641817763226700}
I180819 01:23:38.356574 2171 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180819 01:23:38.383591 2296 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180819 01:23:38.384751 2305 storage/replica_range_lease.go:554  [replicate,n1,s1,r5/1:/System/ts{d-e}] transferring lease to s2
I180819 01:23:38.397154 2283 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180819 01:23:38.421429 2305 storage/replica_range_lease.go:617  [replicate,n1,s1,r5/1:/System/ts{d-e}] done transferring lease to s2: <nil>
I180819 01:23:38.429809 1831 storage/replica_proposal.go:214  [n2,s2,r5/2:/System/ts{d-e}] new range lease repl=(n2,s2):2 seq=3 start=1534641818.384888966,0 epo=1 pro=1534641818.384907739,0 following repl=(n1,s1):1 seq=2 start=1534641814.459320444,0 exp=1534641823.489315711,0 pro=1534641814.489348354,0
I180819 01:23:38.445106 2408 storage/replica_raftstorage.go:529  [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot a42f678b at index 19
I180819 01:23:38.472058 2314 sql/event_log.go:126  [n1,client=127.0.0.1:55592,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180819 01:23:38.494601 2423 storage/replica_command.go:282  [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180819 01:23:38.507907 2408 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r14/1:/Table/1{7-8}] connection to n3 established
I180819 01:23:38.512095 2161 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180819 01:23:38.522488 2408 storage/store_snapshot.go:655  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 9, rate-limit: 2.0 MiB/sec, 4ms
I180819 01:23:38.524544 2430 storage/replica_raftstorage.go:773  [n3,s3,r14/?:{-}] applying preemptive snapshot at index 19 (id=a42f678b, encoded size=2553, 1 rocksdb batches, 9 log entries)
I180819 01:23:38.527205 2430 storage/replica_raftstorage.go:779  [n3,s3,r14/?:/Table/1{7-8}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180819 01:23:38.530465 2408 storage/replica_command.go:792  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=842370875480]
I180819 01:23:38.564624 2245 storage/replica.go:3749  [n2,s2,r19/2:/Table/2{2-3}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180819 01:23:38.595301 2021 storage/store.go:3691  [n1,s1,r19/1:/Table/2{2-3}] added to replica GC queue (peer suggestion)
I180819 01:23:38.627976 2408 storage/replica.go:3749  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180819 01:23:38.652371 2314 sql/event_log.go:126  [n1,client=127.0.0.1:55592,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180819 01:23:38.718094 2469 storage/replica_command.go:792  [replicate,n2,s2,r5/2:/System/ts{d-e}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n2,s2):2, next=3, gen=842392225632]
I180819 01:23:38.718536 2502 storage/replica_range_lease.go:554  [replicate,n1,s1,r14/1:/Table/1{7-8}] transferring lease to s3
I180819 01:23:38.732798 2073 storage/replica_proposal.go:214  [n3,s3,r14/2:/Table/1{7-8}] new range lease repl=(n3,s3):2 seq=3 start=1534641818.719138843,0 epo=1 pro=1534641818.719172126,0 following repl=(n1,s1):1 seq=2 start=1534641814.459320444,0 exp=1534641823.489315711,0 pro=1534641814.489348354,0
E180819 01:23:38.766106 2504 storage/queue.go:788  [replicate,n3,s3,r14/2:/Table/1{7-8}] no removable replicas from range that needs a removal: [raft progress unknown]
I180819 01:23:38.775738 2502 storage/replica_range_lease.go:617  [replicate,n1,s1,r14/1:/Table/1{7-8}] done transferring lease to s3: <nil>
I180819 01:23:38.784611 2395 storage/replica_raftstorage.go:529  [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot 82dc70b4 at index 18
I180819 01:23:38.797453 2411 storage/store.go:2586  [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removing replica
I180819 01:23:38.815668 2411 storage/replica.go:880  [replicaGC,n1,s1,r19/1:/Table/2{2-3}] removed 7 (0+7) keys in 14ms [clear=14ms commit=0ms]
I180819 01:23:38.820962 2395 storage/store_snapshot.go:655  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 36ms
I180819 01:23:38.824320 2494 storage/replica_raftstorage.go:773  [n3,s3,r15/?:{-}] applying preemptive snapshot at index 18 (id=82dc70b4, encoded size=2414, 1 rocksdb batches, 8 log entries)
I180819 01:23:38.826823 2494 storage/replica_raftstorage.go:779  [n3,s3,r15/?:/Table/1{8-9}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180819 01:23:38.852837 2395 storage/replica_command.go:792  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=842392291616]
I180819 01:23:38.885819 2531 storage/replica_command.go:792  [replicate,n3,s3,r14/2:/Table/1{7-8}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, (n3,s3):2, next=3, gen=842385476976]
I180819 01:23:38.903208 2314 sql/event_log.go:126  [n1,client=127.0.0.1:55592,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
--- FAIL: testrace/TestInitialPartitioning: TestInitialPartitioning/TIMESTAMP (46.610s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n1: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
    [n3,s3,r7/2:/Table/{SystemCon…-11}] read completed
    full trace:
       === SPAN START: session recording ===
      [n1,client=127.0.0.1:55592,user=root] [NoTxn pos:4632] executing ExecStmt: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
      [n1,client=127.0.0.1:55592,user=root] executing: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP in state: NoTxn
       === SPAN START: sql txn ===
      [n1,client=127.0.0.1:55592,user=root] [Open pos:4632] executing ExecStmt: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
      [n1,client=127.0.0.1:55592,user=root] executing: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP in state: Open
      [n1,client=127.0.0.1:55592,user=root] planning starts: SELECT
      [n1,client=127.0.0.1:55592,user=root] generating optimizer plan
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:55592,user=root,txn=4beee5fc,n1] querying next range at /Table/2/1/52/"TIMESTAMP"/3/1
      [client=127.0.0.1:55592,user=root,txn=4beee5fc,n1] r7: sending batch 1 Get to (n3,s3):2
      [client=127.0.0.1:55592,user=root,txn=4beee5fc,n1] sending request to 127.0.0.1:33283
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n3] 1 Get
      [n3,s3] executing 1 requests
      [n3,s3,r7/2:/Table/{SystemCon…-11}] read-only path
      [n3,s3,r7/2:/Table/{SystemCon…-11}] command queue
      [n3,s3,r7/2:/Table/{SystemCon…-11}] waiting for read lock
      [n3,s3,r7/2:/Table/{SystemCon…-11}] read completed
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:55592,user=root] client.Txn did AutoCommit. err: <nil>

      [n1,client=127.0.0.1:55592,user=root] added table 'data.public."TIMESTAMP"' to table collection
      [n1,client=127.0.0.1:55592,user=root] optimizer plan succeeded
      [n1,client=127.0.0.1:55592,user=root] planning ends
      [n1,client=127.0.0.1:55592,user=root] checking distributability
      [n1,client=127.0.0.1:55592,user=root] distributable plan: true
      [n1,client=127.0.0.1:55592,user=root] execution starts: distributed
       === SPAN START: consuming rows ===
      [n1,client=127.0.0.1:55592,user=root] creating DistSQL plan with distributedMode=true
      [n1,client=127.0.0.1:55592,user=root] querying next range at /Table/82/1
      [n1,client=127.0.0.1:55592,user=root] running DistSQL plan
       === SPAN START: flow ===
      [n1,client=127.0.0.1:55592,user=root] starting (1 processors, 0 startables)
       === SPAN START: table reader ===
    cockroach.processorid: 0
    cockroach.stat.tablereader.input.rows: 0
    cockroach.stat.tablereader.stalltime: 2µs
      [n1,client=127.0.0.1:55592,user=root] Scan /Table/82/1{-/1970-01-08T21:12:24.000308Z/#}
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:55592,user=root,txn=94af5538,n1] querying next range at /Table/82/1
      [client=127.0.0.1:55592,user=root,txn=94af5538,n1] r242: sending batch 1 Scan to (n1,s1):2
      [client=127.0.0.1:55592,user=root,txn=94af5538,n1] sending request to local server
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n1] 1 Scan
      [n1,s1] executing 1 requests
      [n1,s1,r242/2:/Table/82/1{-/1970-0…}] read-only path
      [n1,s1,r242/2:/Table/82/1{-/1970-0…}] read has no clock uncertainty
      [n1,s1,r242/2:/Table/82/1{-/1970-0…}] command queue
      [n1,s1,r242/2:/Table/82/1{-/1970-0…}] waiting for read lock
      [n1,s1,r242/2:/Table/82/1{-/1970-0…}] read completed
       === SPAN START: ordered aggregator ===
    cockroach.processorid: 1
    cockroach.stat.aggregator.input.rows: 0
    cockroach.stat.aggregator.stalltime: 322µs
    cockroach.stat.aggregator.mem.max: 10 KiB
      [n1,client=127.0.0.1:55592,user=root] accumulation complete
      [n1,client=127.0.0.1:55592,user=root] exiting aggregator
      [n1,client=127.0.0.1:55592,user=root] execution ends
      [n1,client=127.0.0.1:55592,user=root] rows affected: 1
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:55592,user=root] AutoCommit. err: <nil>
      [n1,client=127.0.0.1:55592,user=root] releasing 1 tables
      [n1,client=127.0.0.1:55592,user=root] [NoTxn pos:4633] executing ExecStmt: SET TRACING = off
      [n1,client=127.0.0.1:55592,user=root] executing: SET TRACING = off in state: NoTxn
    goroutine 32157 [running]:
    runtime/debug.Stack(0xa7a358200, 0xc423722750, 0x3da5740)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xb5
    github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e06b60, 0xc420d46e10, 0xc423722720)
        /go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
    github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc420d46e10)
        /go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1156 +0x258
    testing.tRunner(0xc420d46e10, 0xc422a645a0)
        /usr/local/go/src/testing/testing.go:777 +0x16e
    created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:824 +0x565
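
For context on the failure mechanism: the stack trace above ends in testutils.SucceedsSoon (pkg/testutils/soon.go), which re-evaluates a condition until it returns nil or its time budget runs out, and only then fails the test with the "condition failed to evaluate within 45s" message seen above. Below is a minimal sketch of that retry pattern; the backoff and the 45-second budget here are illustrative, not the real pkg/testutils implementation.

package retryutil

import (
	"fmt"
	"time"
)

// succeedsSoon retries fn with a growing backoff until it returns nil or the
// time budget elapses, then surfaces the last error. This mirrors the shape of
// testutils.SucceedsSoon; the exact backoff and default timeout in CockroachDB
// differ in detail.
func succeedsSoon(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 5 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition failed to evaluate within %s: %s", timeout, err)
		}
		time.Sleep(wait)
		if wait < time.Second {
			wait *= 2
		}
	}
}

In this failure, the retried condition is the partition test's check that the scan is served by the expected node, and it never became true within the 45-second budget.
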
------- Stdout: -------
I180819 01:26:02.049129 32214 storage/replica.go:3749  [n2,s2,r239/3:/Table/81/1/27196703921030489…] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n2,s2):3] next=4
I180819 01:26:02.087520 4532 storage/store.go:3691  [n3,s3,r239/1:/Table/81/1/27196703921030489…] added to replica GC queue (peer suggestion)
I180819 01:26:02.102571 32098 storage/store.go:2586  [replicaGC,n3,s3,r239/1:/Table/81/1/27196703921030489…] removing replica
I180819 01:26:02.105404 32098 storage/replica.go:880  [replicaGC,n3,s3,r239/1:/Table/81/1/27196703921030489…] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
I180819 01:26:02.233244 2314 sql/event_log.go:126  [n1,client=127.0.0.1:55592,user=root] Event: "create_table", target: 82, info: {TableName:data.public."TIMESTAMP" Statement:CREATE TABLE "TIMESTAMP" (a TIMESTAMP PRIMARY KEY) PARTITION BY LIST (a) (PARTITION p VALUES IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP)) User:root}
I180819 01:26:02.278652 32225 storage/replica_command.go:282  [split,n3,s3,r237/1:/{Table/81/2-Max}] initiating a split of this range at key /Table/82 [r241]
I180819 01:26:02.361250 2190 storage/replica_proposal.go:214  [n3,s3,r237/1:/{Table/81/2-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:02.755572 2314 sql/event_log.go:126  [n1,client=127.0.0.1:55592,user=root] Event: "set_zone_config", target: 82, info: {Target:data."TIMESTAMP"@primary Config:constraints: [+n1] User:root}
I180819 01:26:02.833192 32258 storage/replica_command.go:282  [split,n3,s3,r241/1:/{Table/82-Max}] initiating a split of this range at key /Table/82/1 [r242]
I180819 01:26:02.959059 2201 storage/replica_proposal.go:214  [n3,s3,r241/1:/{Table/82-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:02.970897 32390 storage/replica_command.go:282  [split,n3,s3,r242/1:/{Table/82/1-Max}] initiating a split of this range at key /Table/82/2 [r243]
I180819 01:26:03.075598 2046 storage/replica_proposal.go:214  [n3,s3,r242/1:/{Table/82/1-Max}] new range lease repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:03.123650 32344 storage/replica_raftstorage.go:529  [replicate,n3,s3,r242/1:/Table/82/{1-2}] generated preemptive snapshot 8ea3a483 at index 16
I180819 01:26:03.144009 2314 sql/event_log.go:126  [n1,client=127.0.0.1:55592,user=root] Event: "set_zone_config", target: 82, info: {Target:data."TIMESTAMP".p Config:constraints: [+n2] User:root}
I180819 01:26:03.153502 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:03.186678 32344 storage/store_snapshot.go:655  [replicate,n3,s3,r242/1:/Table/82/{1-2}] streamed snapshot to (n1,s1):?: kv pairs: 7, log entries: 6, rate-limit: 2.0 MiB/sec, 34ms
I180819 01:26:03.192065 32411 storage/replica_raftstorage.go:773  [n1,s1,r242/?:{-}] applying preemptive snapshot at index 16 (id=8ea3a483, encoded size=1944, 1 rocksdb batches, 6 log entries)
I180819 01:26:03.207922 32411 storage/replica_raftstorage.go:779  [n1,s1,r242/?:/Table/82/{1-2}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=14ms commit=0ms]
I180819 01:26:03.225994 32344 storage/replica_command.go:792  [replicate,n3,s3,r242/1:/Table/82/{1-2}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r242:/Table/82/{1-2} [(n3,s3):1, next=2, gen=842357802616]
I180819 01:26:03.238056 32437 storage/replica_command.go:282  [split,n3,s3,r242/1:/Table/82/{1-2}] initiating a split of this range at key /Table/82/1/1970-01-08T21:12:24.000309Z [r244]
I180819 01:26:03.388628 32344 storage/replica.go:3749  [n3,s3,r242/1:/Table/82/{1-2}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180819 01:26:03.413507 32437 storage/split_queue.go:135  [split,n3,s3,r242/1:/Table/82/{1-2}] split saw concurrent descriptor modification; maybe retrying
I180819 01:26:03.415001 32455 storage/replica_command.go:282  [split,n3,s3,r242/1:/Table/82/{1-2}] initiating a split of this range at key /Table/82/1/1970-01-08T21:12:24.000309Z [r245]
I180819 01:26:03.581059 2043 storage/replica_proposal.go:214  [n3,s3,r242/1:/Table/82/{1-2}] new range lease repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:03.610902 32485 storage/replica_range_lease.go:554  [replicate,n3,s3,r242/1:/Table/82/1{-/1970-0…}] transferring lease to s1
I180819 01:26:03.612779 32370 storage/replica_command.go:282  [split,n3,s3,r245/1:/Table/82/{1/1970-…-2}] initiating a split of this range at key /Table/82/1/1970-01-08T21:12:24.000309001Z [r246]
I180819 01:26:03.636665 1215 storage/replica_proposal.go:214  [n1,s1,r242/2:/Table/82/1{-/1970-0…}] new range lease repl=(n1,s1):2 seq=7 start=1534641963.622462679,0 epo=1 pro=1534641963.622509656,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:03.645058 32485 storage/replica_range_lease.go:617  [replicate,n3,s3,r242/1:/Table/82/1{-/1970-0…}] done transferring lease to s1: <nil>
I180819 01:26:03.696986 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:03.781235 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:03.791862 2179 storage/replica_proposal.go:214  [n3,s3,r245/1:/Table/82/{1/1970-…-2}] new range lease repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:03.821585 32520 storage/replica_range_lease.go:554  [replicate,n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] transferring lease to s1
I180819 01:26:03.884588 1182 storage/replica_proposal.go:214  [n1,s1,r245/2:/Table/82/1/1970-01-08T21:12:…] new range lease repl=(n1,s1):2 seq=7 start=1534641963.821730314,0 epo=1 pro=1534641963.821760764,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:03.886210 32520 storage/replica_range_lease.go:617  [replicate,n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] done transferring lease to s1: <nil>
E180819 01:26:03.887840 32523 storage/queue.go:788  [replicate,n3,s3,r246/1:/Table/82/{1/1970-…-2}] no removable replicas from range that needs a removal: [raft progress unknown]
I180819 01:26:03.894982 32524 storage/replica_raftstorage.go:529  [replicate,n3,s3,r87/3:/Table/6{0/2-1}] generated preemptive snapshot 60fd5b6a at index 34
I180819 01:26:03.898971 32526 storage/replica_raftstorage.go:529  [replicate,n1,s1,r24/3:/Table/53{-/2}] generated preemptive snapshot 7520c214 at index 35
I180819 01:26:03.952056 32526 storage/store_snapshot.go:655  [replicate,n1,s1,r24/3:/Table/53{-/2}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 25, rate-limit: 2.0 MiB/sec, 52ms
I180819 01:26:03.953027 32524 storage/store_snapshot.go:655  [replicate,n3,s3,r87/3:/Table/6{0/2-1}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 24, rate-limit: 2.0 MiB/sec, 57ms
I180819 01:26:03.969157 32529 storage/replica_raftstorage.go:773  [n2,s2,r24/?:{-}] applying preemptive snapshot at index 35 (id=7520c214, encoded size=6856, 1 rocksdb batches, 25 log entries)
I180819 01:26:03.980922 32529 storage/replica_raftstorage.go:779  [n2,s2,r24/?:/Table/53{-/2}] applied preemptive snapshot in 12ms [clear=0ms batch=0ms entries=10ms commit=1ms]
I180819 01:26:03.981633 32542 storage/replica_raftstorage.go:773  [n2,s2,r87/?:{-}] applying preemptive snapshot at index 34 (id=60fd5b6a, encoded size=6780, 1 rocksdb batches, 24 log entries)
I180819 01:26:03.986932 32542 storage/replica_raftstorage.go:779  [n2,s2,r87/?:/Table/6{0/2-1}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I180819 01:26:03.997859 32526 storage/replica_command.go:792  [replicate,n1,s1,r24/3:/Table/53{-/2}] change replicas (ADD_REPLICA (n2,s2):4): read existing descriptor r24:/Table/53{-/2} [(n1,s1):3, next=4, gen=842378370528]
I180819 01:26:04.023747 32524 storage/replica_command.go:792  [replicate,n3,s3,r87/3:/Table/6{0/2-1}] change replicas (ADD_REPLICA (n2,s2):4): read existing descriptor r87:/Table/6{0/2-1} [(n3,s3):3, next=4, gen=842354033680]
I180819 01:26:04.061543 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a > '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.117836 32524 storage/replica.go:3749  [n3,s3,r87/3:/Table/6{0/2-1}] proposing ADD_REPLICA((n2,s2):4): updated=[(n3,s3):3 (n2,s2):4] next=5
I180819 01:26:04.167636 32526 storage/replica.go:3749  [n1,s1,r24/3:/Table/53{-/2}] proposing ADD_REPLICA((n2,s2):4): updated=[(n1,s1):3 (n2,s2):4] next=5
I180819 01:26:04.211885 32587 storage/replica_range_lease.go:554  [replicate,n3,s3,r87/3:/Table/6{0/2-1}] transferring lease to s2
I180819 01:26:04.248724 1808 storage/replica_proposal.go:214  [n2,s2,r87/4:/Table/6{0/2-1}] new range lease repl=(n2,s2):4 seq=7 start=1534641964.212044556,0 epo=1 pro=1534641964.212066268,0 following repl=(n3,s3):3 seq=6 start=1534641859.417231562,0 epo=1 pro=1534641859.417247847,0
I180819 01:26:04.249742 32587 storage/replica_range_lease.go:617  [replicate,n3,s3,r87/3:/Table/6{0/2-1}] done transferring lease to s2: <nil>
I180819 01:26:04.253750 32649 storage/replica_range_lease.go:554  [replicate,n1,s1,r24/3:/Table/53{-/2}] transferring lease to s2
E180819 01:26:04.256110 32481 storage/queue.go:788  [replicate,n3,s3,r246/1:/Table/82/{1/1970-…-2}] no removable replicas from range that needs a removal: [raft progress unknown]
I180819 01:26:04.286320 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.295210 32649 storage/replica_range_lease.go:617  [replicate,n1,s1,r24/3:/Table/53{-/2}] done transferring lease to s2: <nil>
I180819 01:26:04.304551 1817 storage/replica_proposal.go:214  [n2,s2,r24/4:/Table/53{-/2}] new range lease repl=(n2,s2):4 seq=5 start=1534641964.258379626,0 epo=1 pro=1534641964.258507149,0 following repl=(n1,s1):3 seq=4 start=1534641831.488937600,0 epo=1 pro=1534641831.488961634,0
I180819 01:26:04.333168 32634 storage/replica_command.go:792  [replicate,n1,s1,r242/2:/Table/82/1{-/1970-0…}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r242:/Table/82/1{-/1970-01-08T21:12:24.000309Z} [(n3,s3):1, (n1,s1):2, next=3, gen=842364552832]
I180819 01:26:04.455848 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.537765 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.667502 1253 server/status/runtime.go:433  [n1] runtime stats: 1015 MiB RSS, 648 goroutines, 62 MiB/5.8 MiB/88 MiB GO alloc/idle/total, 99 MiB/138 MiB CGO alloc/total, 3172.12cgo/sec, 1.74/0.07 %(u/s)time, 0.03 %gc (14x)
I180819 01:26:04.698870 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.699673 1237 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180819 01:26:04.771985 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.932411 32634 storage/replica.go:3749  [n1,s1,r242/2:/Table/82/1{-/1970-0…}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180819 01:26:04.942217 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:04.985242 2446 storage/store.go:3691  [n3,s3,r242/1:/Table/82/1{-/1970-0…}] added to replica GC queue (peer suggestion)
I180819 01:26:05.006963 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.010078 32815 storage/store.go:2586  [replicaGC,n3,s3,r242/1:/Table/82/1{-/1970-0…}] removing replica
I180819 01:26:05.038578 32815 storage/replica.go:880  [replicaGC,n3,s3,r242/1:/Table/82/1{-/1970-0…}] removed 7 (0+7) keys in 15ms [clear=15ms commit=0ms]
I180819 01:26:05.110967 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.202445 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.323999 32965 storage/replica_range_lease.go:554  [replicate,n1,s1,r245/2:/Table/82/1/1970-01-08T21:12:…] transferring lease to s3
I180819 01:26:05.330908 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.400929 32965 storage/replica_range_lease.go:617  [replicate,n1,s1,r245/2:/Table/82/1/1970-01-08T21:12:…] done transferring lease to s3: <nil>
I180819 01:26:05.413148 2208 storage/replica_proposal.go:214  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] new range lease repl=(n3,s3):1 seq=8 start=1534641965.324173786,0 epo=1 pro=1534641965.324192986,0 following repl=(n1,s1):2 seq=7 start=1534641963.821730314,0 epo=1 pro=1534641963.821760764,0
I180819 01:26:05.488733 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.699588 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.772892 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:05.959491 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP)
I180819 01:26:06.157561 32157 ccl/partitionccl/partition_test.go:1156  SucceedsSoon: expected to scan on n2: SELECT count(*) FROM "TIMESTAMP" WHERE a IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP)
[n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] read completed
full trace:
   === SPAN START: session recording ===
  [n1,client=127.0.0.1:55592,user=root] [NoTxn pos:4296] executing ExecStmt: SELECT count(*) FROM "TIMESTAMP" WHERE a IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP,)
  [n1,client=127.0.0.1:55592,user=root] executing: SELECT count(*) FROM "TIMESTAMP" WHERE a IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP,) in state: NoTxn
   === SPAN START: sql txn ===
  [n1,client=127.0.0.1:55592,user=root] [Open pos:4296] executing ExecStmt: SELECT count(*) FROM "TIMESTAMP" WHERE a IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP,)
  [n1,client=127.0.0.1:55592,user=root] executing: SELECT count(*) FROM "TIMESTAMP" WHERE a IN ('1970-01-08 21:12:24.000309+00:00':::TIMESTAMP,) in state: Open
  [n1,client=127.0.0.1:55592,user=root] planning starts: SELECT
  [n1,client=127.0.0.1:55592,user=root] generating optimizer plan
  [n1,client=127.0.0.1:55592,user=root] added table 'data.public."TIMESTAMP"' to table collection
  [n1,client=127.0.0.1:55592,user=root] optimizer plan succeeded
  [n1,client=127.0.0.1:55592,user=root] planning ends
  [n1,client=127.0.0.1:55592,user=root] checking distributability
  [n1,client=127.0.0.1:55592,user=root] distributable plan: true
  [n1,client=127.0.0.1:55592,user=root] execution starts: distributed
   === SPAN START: consuming rows ===
  [n1,client=127.0.0.1:55592,user=root] creating DistSQL plan with distributedMode=true
  [n1,client=127.0.0.1:55592,user=root] querying next range at /Table/82/1/1970-01-08T21:12:24.000309Z
  [n1,client=127.0.0.1:55592,user=root] running DistSQL plan
   === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
   === SPAN START: outbox ===
cockroach.stat.outbox.bytes_sent: ࠰
cockroach.streamid: 1
   === SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.input.rows: 0
cockroach.stat.tablereader.stalltime: 2µs
  [n3] Scan /Table/82/1/1970-01-08T21:12:24.000309Z{-/#}
   === SPAN START: [async] drain ===
   === SPAN START: flow ===
  [n1,client=127.0.0.1:55592,user=root] starting (1 processors, 0 startables)
   === SPAN START: noop ===
cockroach.processorid: 2
  [n3] Consumer sent handshake. Consuming flow scheduled: true
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [txn=0d2e95ea,n3] querying next range at /Table/82/1/1970-01-08T21:12:24.000309Z
  [txn=0d2e95ea,n3] r245: sending batch 1 Scan to (n3,s3):1
  [txn=0d2e95ea,n3] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n3] 1 Scan
  [n3,s3] executing 1 requests
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] read-only path
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] command queue
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] waiting for read lock
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] read completed
   === SPAN START: ordered aggregator ===
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.mem.max: 10 KiB
cockroach.stat.aggregator.stalltime: 255µs
  [n3] accumulation complete
  [n1,client=127.0.0.1:55592,user=root] execution ends
  [n1,client=127.0.0.1:55592,user=root] rows affected: 1
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:55592,user=root] AutoCommit. err: <nil>
  [n1,client=127.0.0.1:55592,user=root] releasing 1 tables
  [n1,client=127.0.0.1:55592,user=root] [NoTxn pos:4297] executing ExecStmt: SET TRACING = off
  [n1,client=127.0.0.1:55592,user=root] executing: SET TRACING = off in state: NoTxn
I180819 01:26:06.158249 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:06.200877 33113 storage/replica_command.go:792  [replicate,n2,s2,r24/4:/Table/53{-/2}] change replicas (REMOVE_REPLICA (n1,s1):3): read existing descriptor r24:/Table/53{-/2} [(n1,s1):3, (n2,s2):4, next=5, gen=842350791032]
I180819 01:26:06.240208 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:06.347834 33206 storage/replica_range_lease.go:554  [replicate,n3,s3,r246/1:/Table/82/{1/1970-…-2}] transferring lease to s1
I180819 01:26:06.352178 32157 ccl/partitionccl/partition_test.go:1156  SucceedsSoon: expected to scan on n2: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
[n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] read completed
full trace:
   === SPAN START: session recording ===
  [n1,client=127.0.0.1:55592,user=root] [NoTxn pos:4308] executing ExecStmt: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
  [n1,client=127.0.0.1:55592,user=root] executing: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP in state: NoTxn
   === SPAN START: sql txn ===
  [n1,client=127.0.0.1:55592,user=root] [Open pos:4308] executing ExecStmt: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
  [n1,client=127.0.0.1:55592,user=root] executing: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP in state: Open
  [n1,client=127.0.0.1:55592,user=root] planning starts: SELECT
  [n1,client=127.0.0.1:55592,user=root] generating optimizer plan
  [n1,client=127.0.0.1:55592,user=root] added table 'data.public."TIMESTAMP"' to table collection
  [n1,client=127.0.0.1:55592,user=root] optimizer plan succeeded
  [n1,client=127.0.0.1:55592,user=root] planning ends
  [n1,client=127.0.0.1:55592,user=root] checking distributability
  [n1,client=127.0.0.1:55592,user=root] distributable plan: true
  [n1,client=127.0.0.1:55592,user=root] execution starts: distributed
   === SPAN START: consuming rows ===
  [n1,client=127.0.0.1:55592,user=root] creating DistSQL plan with distributedMode=true
  [n1,client=127.0.0.1:55592,user=root] querying next range at /Table/82/1/1970-01-08T21:12:24.000309Z
  [n1,client=127.0.0.1:55592,user=root] running DistSQL plan
   === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
   === SPAN START: outbox ===
cockroach.stat.outbox.bytes_sent: ࠴
cockroach.streamid: 1
   === SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.input.rows: 0
cockroach.stat.tablereader.stalltime: 1µs
  [n3] Scan /Table/82/1/1970-01-08T21:12:24.000309Z{-/#}
   === SPAN START: txn coordinator send ===
   === SPAN START: dist sender send ===
  [txn=21f3efa1,n3] querying next range at /Table/82/1/1970-01-08T21:12:24.000309Z
  [txn=21f3efa1,n3] r245: sending batch 1 Scan to (n3,s3):1
  [txn=21f3efa1,n3] sending request to local server
   === SPAN START: /cockroach.roachpb.Internal/Batch ===
  [n3] 1 Scan
  [n3,s3] executing 1 requests
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] read-only path
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] command queue
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] waiting for read lock
  [n3,s3,r245/1:/Table/82/1/1970-01-08T21:12:…] read completed
   === SPAN START: flow ===
  [n1,client=127.0.0.1:55592,user=root] starting (1 processors, 0 startables)
   === SPAN START: noop ===
cockroach.processorid: 2
   === SPAN START: [async] drain ===
  [n3] Consumer sent handshake. Consuming flow scheduled: true
   === SPAN START: ordered aggregator ===
cockroach.stat.aggregator.stalltime: 370µs
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.mem.max: 10 KiB
  [n3] accumulation complete
  [n1,client=127.0.0.1:55592,user=root] execution ends
  [n1,client=127.0.0.1:55592,user=root] rows affected: 1
   === SPAN START: txn coordinator send ===
  [n1,client=127.0.0.1:55592,user=root] AutoCommit. err: <nil>
  [n1,client=127.0.0.1:55592,user=root] releasing 1 tables
  [n1,client=127.0.0.1:55592,user=root] [NoTxn pos:4309] executing ExecStmt: SET TRACING = off
  [n1,client=127.0.0.1:55592,user=root] executing: SET TRACING = off in state: NoTxn
I180819 01:26:06.377542 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a > '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:06.409505 33206 storage/replica_range_lease.go:617  [replicate,n3,s3,r246/1:/Table/82/{1/1970-…-2}] done transferring lease to s1: <nil>
I180819 01:26:06.437920 1196 storage/replica_proposal.go:214  [n1,s1,r246/2:/Table/82/{1/1970-…-2}] new range lease repl=(n1,s1):2 seq=7 start=1534641966.347993354,0 epo=1 pro=1534641966.348035838,0 following repl=(n3,s3):1 seq=6 start=1534641886.761263311,0 epo=1 pro=1534641886.761309636,0
I180819 01:26:06.600154 33113 storage/replica.go:3749  [n2,s2,r24/4:/Table/53{-/2}] proposing REMOVE_REPLICA((n1,s1):3): updated=[(n2,s2):4] next=5
I180819 01:26:06.732419 33229 storage/replica_command.go:792  [replicate,n2,s2,r87/4:/Table/6{0/2-1}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r87:/Table/6{0/2-1} [(n3,s3):3, (n2,s2):4, next=5, gen=842412291960]
I180819 01:26:06.741985 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a < '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:06.786815 2021 storage/store.go:3691  [n1,s1,r24/3:/Table/53{-/2}] added to replica GC queue (peer suggestion)
I180819 01:26:06.921894 33209 storage/store.go:2586  [replicaGC,n1,s1,r24/3:/Table/53{-/2}] removing replica
I180819 01:26:06.924042 33209 storage/replica.go:880  [replicaGC,n1,s1,r24/3:/Table/53{-/2}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
I180819 01:26:06.957254 32157 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "TIMESTAMP" WHERE a = '1970-01-08 21:12:24.000309+00:00':::TIMESTAMP
I180819 01:26:07.039148 33229 storage/replica.go:3749  [n2,s2,r87/4:/Table/6{0/2-1}] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n2,s2):4] next=5
I180819 01:26:07.050340 1869 server/status/runtime.go:433  [n2] ru
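
For context on what that retried condition checks: per the create_table and set_zone_config events above, the test creates data.public."TIMESTAMP" partitioned by LIST on its TIMESTAMP column, constrains the primary index to n1 and partition p to n2, and then repeatedly issues the count(*) queries with session tracing enabled, asserting that the trace shows the read served by the expected node. A rough sketch of that kind of check follows; it is not the actual ccl/partitionccl/partition_test.go code, and the helper name and exact trace query are assumptions.

package partitioncheck

import (
	gosql "database/sql"
	"strings"
)

// scanServedByNode is a hypothetical stand-in for the test's verification
// step: run the query under session tracing and report whether a
// "read completed" trace event is tagged with the expected node (the tags in
// the traces above look like "[n2,s2,r87/4:...]").
func scanServedByNode(db *gosql.DB, query, node string) (bool, error) {
	if _, err := db.Exec("SET TRACING = on"); err != nil {
		return false, err
	}
	if _, err := db.Exec(query); err != nil {
		return false, err
	}
	if _, err := db.Exec("SET TRACING = off"); err != nil {
		return false, err
	}
	rows, err := db.Query("SELECT tag, message FROM [SHOW TRACE FOR SESSION]")
	if err != nil {
		return false, err
	}
	defer rows.Close()
	for rows.Next() {
		var tag, message string
		if err := rows.Scan(&tag, &message); err != nil {
			return false, err
		}
		if message == "read completed" && strings.Contains(tag, node+",") {
			return true, nil
		}
	}
	return false, rows.Err()
}

The SucceedsSoon failures above correspond to this kind of check never observing the expected node before the 45-second budget expired.
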

Please assign, take a look and update the issue accordingly.

a-robinson commented 6 years ago

Dupe of https://github.com/cockroachdb/cockroach/issues/28789