cockroachdb / cockroach

CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
https://www.cockroachlabs.com

teamcity: failed tests on master: testrace/TestRepartitioning, test/TestRepartitioning #28847

Closed — cockroach-teamcity closed this issue 6 years ago

cockroach-teamcity commented 6 years ago

The following tests appear to have failed:

#850004:

--- FAIL: testrace/TestRepartitioning (361.500s)

------- Stdout: -------
W180820 17:16:49.172091 44336 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 17:16:49.251051 44336 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 17:16:49.251690 44336 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 17:16:49.251793 44336 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 17:16:49.259975 44336 server/config.go:496  [n?] 1 storage engine initialized
I180820 17:16:49.260308 44336 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180820 17:16:49.260433 44336 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180820 17:16:49.364434 44336 server/node.go:373  [n?] **** cluster 09795885-e1ca-4364-96b8-98772ab65d64 has been created
I180820 17:16:49.364632 44336 server/server.go:1401  [n?] **** add additional nodes by specifying --join=127.0.0.1:40367
I180820 17:16:49.365922 44336 gossip/gossip.go:383  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40367" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1809-g71efe4b" started_at:1534785409365455913 
I180820 17:16:49.437291 44336 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180820 17:16:49.437906 44336 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180820 17:16:49.438568 44336 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180820 17:16:49.438879 44336 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "09795885-e1ca-4364-96b8-98772ab65d64"
I180820 17:16:49.439392 44336 server/node.go:546  [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180820 17:16:49.444708 44336 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:16:49.451554 44336 server/server.go:1807  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 17:16:49.463865 44629 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:16:49.465077 44336 server/server.go:1538  [n1] starting https server at 127.0.0.1:39437 (use: 127.0.0.1:39437)
I180820 17:16:49.468018 44336 server/server.go:1540  [n1] starting grpc/postgres server at 127.0.0.1:40367
I180820 17:16:49.468133 44336 server/server.go:1541  [n1] advertising CockroachDB node at 127.0.0.1:40367
W180820 17:16:49.468460 44336 jobs/registry.go:308  [n1] unable to get node liveness: node not in the liveness table
I180820 17:16:49.497344 44372 storage/replica_command.go:282  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180820 17:16:49.764948 44618 storage/replica_command.go:282  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I180820 17:16:49.799258 44394 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
W180820 17:16:49.812417 44379 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=f01656f8 key=/Table/SystemConfigSpan/Start rw=true pri=0.04365919 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534785409.574321840,0 orig=1534785409.574321840,0 max=1534785409.574321840,0 wto=false rop=false seq=6
I180820 17:16:50.003010 44416 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180820 17:16:50.039482 44691 storage/replica_command.go:282  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180820 17:16:50.124474 44381 storage/replica_consistency.go:129  [consistencyChecker,n1,s1,r2/1:/System/{-NodeLive…}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1534785409778648224 IntentAge:0 GCBytesAge:0 LiveBytes:-22070 LiveCount:-461 KeyBytes:-21468 KeyCount:-461 ValBytes:-602 ValCount:-461 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I180820 17:16:50.337450 44679 storage/replica_command.go:282  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180820 17:16:50.516361 44726 storage/replica_command.go:282  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180820 17:16:50.789292 44624 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180820 17:16:50.841579 44665 storage/replica_command.go:282  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180820 17:16:51.025028 44732 storage/replica_command.go:282  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180820 17:16:51.131967 44730 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180820 17:16:51.191612 44745 storage/replica_command.go:282  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180820 17:16:51.385011 44759 storage/replica_command.go:282  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180820 17:16:51.497266 44819 storage/replica_command.go:282  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180820 17:16:51.564892 44790 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180820 17:16:51.662166 44822 storage/replica_command.go:282  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180820 17:16:51.742388 44809 storage/replica_command.go:282  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180820 17:16:51.822468 44781 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180820 17:16:51.917515 44811 storage/replica_command.go:282  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180820 17:16:51.964682 44786 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180820 17:16:52.044391 44336 server/server.go:1594  [n1] done ensuring all necessary migrations have run
I180820 17:16:52.045299 44336 server/server.go:1597  [n1] serving sql connections
I180820 17:16:52.054900 44902 storage/replica_command.go:282  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180820 17:16:52.228815 44919 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180820 17:16:52.250832 44921 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:40367} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1809-g71efe4b StartedAt:1534785409365455913 LocalityAddress:[]} ClusterID:09795885-e1ca-4364-96b8-98772ab65d64 StartedAt:1534785409365455913 LastUp:1534785409365455913}
I180820 17:16:52.277806 44875 storage/replica_command.go:282  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180820 17:16:52.402837 44829 storage/replica_command.go:282  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180820 17:16:52.486569 44909 storage/replica_command.go:282  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180820 17:16:52.573334 44980 storage/replica_command.go:282  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180820 17:16:52.707343 45016 storage/replica_command.go:282  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180820 17:16:52.790592 45020 storage/replica_command.go:282  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180820 17:16:52.906024 45025 storage/replica_command.go:282  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180820 17:16:53.310965 44336 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 17:16:53.458464 44336 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 17:16:53.460796 44336 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 17:16:53.460969 44336 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 17:16:53.511273 44336 server/config.go:496  [n?] 1 storage engine initialized
I180820 17:16:53.511415 44336 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180820 17:16:53.511453 44336 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180820 17:16:53.511732 44336 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180820 17:16:53.572430 44336 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 17:16:53.756970 45065 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:40367
I180820 17:16:53.760644 45050 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:34721}
I180820 17:16:53.776882 44336 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180820 17:16:53.777241 44336 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "09795885-e1ca-4364-96b8-98772ab65d64"
I180820 17:16:53.780503 45157 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 17:16:53.787364 45156 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 17:16:53.795616 44336 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 17:16:53.806710 44336 server/node.go:428  [n?] new node allocated ID 2
I180820 17:16:53.807475 44336 gossip/gossip.go:383  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34721" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1809-g71efe4b" started_at:1534785413806916306 
I180820 17:16:53.808527 44336 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180820 17:16:53.808830 44336 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180820 17:16:53.828563 45040 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180820 17:16:53.863682 44336 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180820 17:16:53.867962 44336 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180820 17:16:53.880839 44336 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:16:53.881033 44336 server/server.go:1807  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 17:16:53.892781 45271 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:16:53.907053 44336 server/server.go:1538  [n2] starting https server at 127.0.0.1:36747 (use: 127.0.0.1:36747)
I180820 17:16:53.907332 44336 server/server.go:1540  [n2] starting grpc/postgres server at 127.0.0.1:34721
I180820 17:16:53.907411 44336 server/server.go:1541  [n2] advertising CockroachDB node at 127.0.0.1:34721
I180820 17:16:53.935565 45265 storage/replica_raftstorage.go:538  [replicate,n1,s1,r21/1:/Table/5{0-1}] generated preemptive snapshot 66b2bcbf at index 18
I180820 17:16:53.963233 44336 server/server.go:1594  [n2] done ensuring all necessary migrations have run
I180820 17:16:53.963526 44336 server/server.go:1597  [n2] serving sql connections
I180820 17:16:54.053353 45160 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180820 17:16:54.207471 45265 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r21/1:/Table/5{0-1}] connection to n2 established
I180820 17:16:54.207673 45282 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180820 17:16:54.211738 45265 storage/store_snapshot.go:655  [replicate,n1,s1,r21/1:/Table/5{0-1}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
I180820 17:16:54.215387 45302 storage/replica_raftstorage.go:782  [n2,s2,r21/?:{-}] applying preemptive snapshot at index 18 (id=66b2bcbf, encoded size=2220, 1 rocksdb batches, 8 log entries)
I180820 17:16:54.217969 45302 storage/replica_raftstorage.go:788  [n2,s2,r21/?:/Table/5{0-1}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 17:16:54.226371 45265 storage/replica_command.go:792  [replicate,n1,s1,r21/1:/Table/5{0-1}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, next=2, gen=842350792952]
I180820 17:16:54.325808 45101 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180820 17:16:54.353749 45265 storage/replica.go:3749  [n1,s1,r21/1:/Table/5{0-1}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 17:16:54.394604 45103 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:34721} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1809-g71efe4b StartedAt:1534785413806916306 LocalityAddress:[]} ClusterID:09795885-e1ca-4364-96b8-98772ab65d64 StartedAt:1534785413806916306 LastUp:1534785413806916306}
W180820 17:16:54.403143 44336 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180820 17:16:54.415690 45340 storage/replica_range_lease.go:554  [replicate,n1,s1,r21/1:/Table/5{0-1}] transferring lease to s2
I180820 17:16:54.441247 45340 storage/replica_range_lease.go:617  [replicate,n1,s1,r21/1:/Table/5{0-1}] done transferring lease to s2: <nil>
I180820 17:16:54.443986 45344 storage/replica_raftstorage.go:538  [replicate,n1,s1,r16/1:/Table/{19-20}] generated preemptive snapshot 180a192e at index 18
I180820 17:16:54.455215 45148 storage/replica_proposal.go:214  [n2,s2,r21/2:/Table/5{0-1}] new range lease repl=(n2,s2):2 seq=3 start=1534785414.415870191,0 epo=1 pro=1534785414.415899271,0 following repl=(n1,s1):1 seq=2 start=1534785409.377846354,0 exp=1534785418.413775468,0 pro=1534785409.413835387,0
I180820 17:16:54.462155 45344 storage/store_snapshot.go:655  [replicate,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 17ms
I180820 17:16:54.465906 45383 storage/replica_raftstorage.go:782  [n2,s2,r16/?:{-}] applying preemptive snapshot at index 18 (id=180a192e, encoded size=2418, 1 rocksdb batches, 8 log entries)
I180820 17:16:54.507545 45295 storage/replica_command.go:792  [replicate,n2,s2,r21/2:/Table/5{0-1}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r21:/Table/5{0-1} [(n1,s1):1, (n2,s2):2, next=3, gen=842355435112]
I180820 17:16:54.509439 45383 storage/replica_raftstorage.go:788  [n2,s2,r16/?:/Table/{19-20}] applied preemptive snapshot in 43ms [clear=0ms batch=0ms entries=14ms commit=7ms]
I180820 17:16:54.550325 45344 storage/replica_command.go:792  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, next=2, gen=842353524912]
I180820 17:16:54.606921 45387 storage/store.go:2612  [replicaGC,n2,s2,r16/?:/Table/{19-20}] removing replica
I180820 17:16:54.608379 45387 storage/replica.go:880  [replicaGC,n2,s2,r16/?:/Table/{19-20}] removed 6 (0+6) keys in 1ms [clear=0ms commit=0ms]
I180820 17:16:54.617552 44336 server/server.go:830  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180820 17:16:54.620472 44336 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 17:16:54.620577 44336 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180820 17:16:54.685834 44336 server/config.go:496  [n?] 1 storage engine initialized
I180820 17:16:54.685993 44336 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180820 17:16:54.686113 44336 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180820 17:16:54.686484 44336 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180820 17:16:54.690760 44336 server/server.go:1403  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180820 17:16:54.734066 45295 storage/replica.go:3749  [n2,s2,r21/2:/Table/5{0-1}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 17:16:54.742605 45344 storage/replica.go:3749  [n1,s1,r16/1:/Table/{19-20}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 17:16:54.764205 45381 storage/store.go:3717  [n1,s1,r21/1:/Table/5{0-1}] added to replica GC queue (peer suggestion)
I180820 17:16:54.852589 45483 storage/replica_raftstorage.go:538  [raftsnapshot,n1,s1,r16/1:/Table/{19-20}] generated Raft snapshot 618ed167 at index 20
I180820 17:16:54.853685 45461 storage/store.go:2612  [replicaGC,n1,s1,r21/1:/Table/5{0-1}] removing replica
I180820 17:16:54.855653 45461 storage/replica.go:880  [replicaGC,n1,s1,r21/1:/Table/5{0-1}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 17:16:54.857986 45483 storage/store_snapshot.go:655  [raftsnapshot,n1,s1,r16/1:/Table/{19-20}] streamed snapshot to (n2,s2):2: kv pairs: 10, log entries: 10, rate-limit: 8.0 MiB/sec, 4ms
I180820 17:16:54.859232 45525 storage/replica_raftstorage.go:782  [n2,s2,r16/2:{-}] applying Raft snapshot at index 20 (id=618ed167, encoded size=3783, 1 rocksdb batches, 10 log entries)
I180820 17:16:54.865415 45525 storage/replica_raftstorage.go:788  [n2,s2,r16/2:/Table/{19-20}] applied Raft snapshot in 6ms [clear=0ms batch=0ms entries=2ms commit=3ms]
I180820 17:16:54.881973 45489 storage/replica_command.go:792  [replicate,n1,s1,r16/1:/Table/{19-20}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r16:/Table/{19-20} [(n1,s1):1, (n2,s2):2, next=3, gen=842354989832]
I180820 17:16:54.989377 45391 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:40367
I180820 17:16:54.992027 45543 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:44621}
I180820 17:16:55.015145 44336 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180820 17:16:55.015481 44336 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "09795885-e1ca-4364-96b8-98772ab65d64"
I180820 17:16:55.018763 45468 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 17:16:55.022302 45489 storage/replica.go:3749  [n1,s1,r16/1:/Table/{19-20}] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1] next=3
I180820 17:16:55.027269 45467 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 17:16:55.043546 45311 storage/store.go:3717  [n2,s2,r16/2:/Table/{19-20}] added to replica GC queue (peer suggestion)
I180820 17:16:55.054915 44336 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180820 17:16:55.061067 45496 storage/replica_raftstorage.go:538  [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot e4e8eb0b at index 18
I180820 17:16:55.067811 45496 storage/store_snapshot.go:655  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
I180820 17:16:55.070014 45499 storage/replica_raftstorage.go:782  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 18 (id=e4e8eb0b, encoded size=2414, 1 rocksdb batches, 8 log entries)
I180820 17:16:55.072485 45499 storage/replica_raftstorage.go:788  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 17:16:55.079251 45496 storage/replica_command.go:792  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2, gen=842353180144]
I180820 17:16:55.081995 45494 storage/store.go:2612  [replicaGC,n2,s2,r16/2:/Table/{19-20}] removing replica
I180820 17:16:55.083441 45494 storage/replica.go:880  [replicaGC,n2,s2,r16/2:/Table/{19-20}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 17:16:55.086938 44336 server/node.go:428  [n?] new node allocated ID 3
I180820 17:16:55.087682 44336 gossip/gossip.go:383  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44621" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1809-g71efe4b" started_at:1534785415087245050 
I180820 17:16:55.088720 44336 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180820 17:16:55.089024 44336 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180820 17:16:55.098583 45587 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180820 17:16:55.101324 45575 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180820 17:16:55.151369 44336 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180820 17:16:55.160168 44336 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180820 17:16:55.180786 44336 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:16:55.180920 45496 storage/replica.go:3749  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180820 17:16:55.180971 44336 server/server.go:1807  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180820 17:16:55.191028 44336 server/server.go:1538  [n3] starting https server at 127.0.0.1:37391 (use: 127.0.0.1:37391)
I180820 17:16:55.191201 44336 server/server.go:1540  [n3] starting grpc/postgres server at 127.0.0.1:44621
I180820 17:16:55.191277 44336 server/server.go:1541  [n3] advertising CockroachDB node at 127.0.0.1:44621
W180820 17:16:55.191569 44336 jobs/registry.go:308  [n3] unable to get node liveness: node not in the liveness table
I180820 17:16:55.193200 45548 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:16:55.267100 44336 server/server.go:1594  [n3] done ensuring all necessary migrations have run
I180820 17:16:55.267334 44336 server/server.go:1597  [n3] serving sql connections
I180820 17:16:55.348886 45705 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180820 17:16:55.426304 45769 storage/replica_range_lease.go:554  [replicate,n1,s1,r14/1:/Table/1{7-8}] transferring lease to s2
I180820 17:16:55.439613 45769 storage/replica_range_lease.go:617  [replicate,n1,s1,r14/1:/Table/1{7-8}] done transferring lease to s2: <nil>
I180820 17:16:55.443580 45773 storage/replica_raftstorage.go:538  [replicate,n1,s1,r13/1:/Table/1{6-7}] generated preemptive snapshot 6de5750e at index 18
I180820 17:16:55.457567 45221 storage/replica_proposal.go:214  [n2,s2,r14/2:/Table/1{7-8}] new range lease repl=(n2,s2):2 seq=3 start=1534785415.426478928,0 epo=1 pro=1534785415.426494543,0 following repl=(n1,s1):1 seq=2 start=1534785409.377846354,0 exp=1534785418.413775468,0 pro=1534785409.413835387,0
E180820 17:16:55.479952 45726 storage/queue.go:788  [replicate,n2,s2,r14/2:/Table/1{7-8}] no removable replicas from range that needs a removal: [raft progress unknown]
I180820 17:16:55.593882 45565 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180820 17:16:55.636041 45798 storage/replica_command.go:792  [replicate,n2,s2,r14/2:/Table/1{7-8}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, (n2,s2):2, next=3, gen=842354373544]
I180820 17:16:55.670719 45706 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180820 17:16:55.764571 45567 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:44621} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1809-g71efe4b StartedAt:1534785415087245050 LocalityAddress:[]} ClusterID:09795885-e1ca-4364-96b8-98772ab65d64 StartedAt:1534785415087245050 LastUp:1534785415087245050}
I180820 17:16:55.795054 45773 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r13/1:/Table/1{6-7}] connection to n3 established
I180820 17:16:55.796611 45763 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180820 17:16:55.866007 45773 storage/store_snapshot.go:655  [replicate,n1,s1,r13/1:/Table/1{6-7}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 65ms
I180820 17:16:55.871897 45834 storage/replica_raftstorage.go:782  [n3,s3,r13/?:{-}] applying preemptive snapshot at index 18 (id=6de5750e, encoded size=2247, 1 rocksdb batches, 8 log entries)
I180820 17:16:55.874730 45834 storage/replica_raftstorage.go:788  [n3,s3,r13/?:/Table/1{6-7}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 17:16:55.900755 45773 storage/replica_command.go:792  [replicate,n1,s1,r13/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, next=2, gen=842417770768]
I180820 17:16:56.026735 45798 storage/replica.go:3749  [n2,s2,r14/2:/Table/1{7-8}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180820 17:16:56.042665 45381 storage/store.go:3717  [n1,s1,r14/1:/Table/1{7-8}] added to replica GC queue (peer suggestion)
I180820 17:16:56.059990 45773 storage/replica.go:3749  [n1,s1,r13/1:/Table/1{6-7}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 17:16:56.074888 45803 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180820 17:16:56.091324 45822 storage/store.go:2612  [replicaGC,n1,s1,r14/1:/Table/1{7-8}] removing replica
I180820 17:16:56.104024 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180820 17:16:56.106254 45822 storage/replica.go:880  [replicaGC,n1,s1,r14/1:/Table/1{7-8}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 17:16:56.111492 45850 storage/replica_command.go:282  [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180820 17:16:56.196163 45884 storage/replica_range_lease.go:554  [replicate,n1,s1,r13/1:/Table/1{6-7}] transferring lease to s3
I180820 17:16:56.218839 45884 storage/replica_range_lease.go:617  [replicate,n1,s1,r13/1:/Table/1{6-7}] done transferring lease to s3: <nil>
I180820 17:16:56.224693 45599 storage/replica_proposal.go:214  [n3,s3,r13/2:/Table/1{6-7}] new range lease repl=(n3,s3):2 seq=3 start=1534785416.196412098,0 epo=1 pro=1534785416.196431287,0 following repl=(n1,s1):1 seq=2 start=1534785409.377846354,0 exp=1534785418.413775468,0 pro=1534785409.413835387,0
I180820 17:16:56.229937 45899 storage/replica_raftstorage.go:538  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot acc81720 at index 41
I180820 17:16:56.287192 45899 storage/store_snapshot.go:655  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n3,s3):?: kv pairs: 39, log entries: 31, rate-limit: 2.0 MiB/sec, 50ms
I180820 17:16:56.291611 45911 storage/replica_raftstorage.go:782  [n3,s3,r4/?:{-}] applying preemptive snapshot at index 41 (id=acc81720, encoded size=98590, 1 rocksdb batches, 31 log entries)
I180820 17:16:56.331982 45911 storage/replica_raftstorage.go:788  [n3,s3,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 40ms [clear=0ms batch=0ms entries=15ms commit=23ms]
I180820 17:16:56.341261 45899 storage/replica_command.go:792  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2, gen=842353180064]
I180820 17:16:56.343695 45778 storage/replica_command.go:792  [replicate,n3,s3,r13/2:/Table/1{6-7}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r13:/Table/1{6-7} [(n1,s1):1, (n3,s3):2, next=3, gen=842354016608]
I180820 17:16:56.442480 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180820 17:16:56.471237 45987 storage/store.go:2612  [replicaGC,n3,s3,r4/?:/System/{NodeLive…-tsd}] removing replica
I180820 17:16:56.472939 45987 storage/replica.go:880  [replicaGC,n3,s3,r4/?:/System/{NodeLive…-tsd}] removed 27 (20+7) keys in 1ms [clear=1ms commit=0ms]
I180820 17:16:56.539638 45899 storage/replica.go:3749  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180820 17:16:56.719919 45888 storage/replica_raftstorage.go:538  [raftsnapshot,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated Raft snapshot 90505a86 at index 43
I180820 17:16:56.727642 45888 storage/store_snapshot.go:655  [raftsnapshot,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n3,s3):2: kv pairs: 41, log entries: 33, rate-limit: 8.0 MiB/sec, 7ms
I180820 17:16:56.766715 46019 storage/replica_raftstorage.go:782  [n3,s3,r4/2:{-}] applying Raft snapshot at index 43 (id=90505a86, encoded size=100156, 1 rocksdb batches, 33 log entries)
I180820 17:16:56.796181 45778 storage/replica.go:3749  [n3,s3,r13/2:/Table/1{6-7}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):2] next=3
I180820 17:16:56.803626 46019 storage/replica_raftstorage.go:788  [n3,s3,r4/2:/System/{NodeLive…-tsd}] applied Raft snapshot in 37ms [clear=0ms batch=0ms entries=23ms commit=12ms]
I180820 17:16:56.819037 46035 storage/replica_command.go:792  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (REMOVE_REPLICA (n3,s3):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, (n3,s3):2, next=3, gen=842403146072]
I180820 17:16:56.826374 45863 storage/store.go:3717  [n1,s1,r13/1:/Table/1{6-7}] added to replica GC queue (peer suggestion)
I180820 17:16:56.867787 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180820 17:16:56.923020 46036 storage/store.go:2612  [replicaGC,n1,s1,r13/1:/Table/1{6-7}] removing replica
I180820 17:16:56.937476 46035 storage/replica.go:3749  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing REMOVE_REPLICA((n3,s3):2): updated=[(n1,s1):1] next=3
I180820 17:16:56.939515 46036 storage/replica.go:880  [replicaGC,n1,s1,r13/1:/Table/1{6-7}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
--- FAIL: testrace/TestRepartitioning: TestRepartitioning/secondary_index_-_list_partitioning/secondary_index_-_list_partitioning_-_DEFAULT (55.950s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n2: SELECT count(*) FROM "secondary index - list partitioning - DEFAULT" WHERE b = 4
    [n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
    full trace:
       === SPAN START: session recording ===
      [n1,client=127.0.0.1:44808,user=root] [NoTxn pos:7089] executing ExecStmt: SELECT count(*) FROM "secondary index - list partitioning - DEFAULT" WHERE b = 4
      [n1,client=127.0.0.1:44808,user=root] executing: SELECT count(*) FROM "secondary index - list partitioning - DEFAULT" WHERE b = 4 in state: NoTxn
       === SPAN START: sql txn ===
      [n1,client=127.0.0.1:44808,user=root] [Open pos:7089] executing ExecStmt: SELECT count(*) FROM "secondary index - list partitioning - DEFAULT" WHERE b = 4
      [n1,client=127.0.0.1:44808,user=root] executing: SELECT count(*) FROM "secondary index - list partitioning - DEFAULT" WHERE b = 4 in state: Open
      [n1,client=127.0.0.1:44808,user=root] planning starts: SELECT
      [n1,client=127.0.0.1:44808,user=root] generating optimizer plan
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:44808,user=root,txn=e1aea833,n1] querying next range at /Table/2/1/103/"secondary index - list partitioning - DEFAULT"/3/1
      [client=127.0.0.1:44808,user=root,txn=e1aea833,n1] r7: sending batch 1 Get to (n1,s1):1
      [client=127.0.0.1:44808,user=root,txn=e1aea833,n1] sending request to local client
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n1] 1 Get
      [n1,s1] executing 1 requests
      [n1,s1,r7/1:/Table/{SystemCon…-11}] read-only path
      [n1,s1,r7/1:/Table/{SystemCon…-11}] command queue
      [n1,s1,r7/1:/Table/{SystemCon…-11}] waiting for read lock
      [n1,s1,r7/1:/Table/{SystemCon…-11}] read completed
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:44808,user=root] client.Txn did AutoCommit. err: <nil>

      [n1,client=127.0.0.1:44808,user=root] added table 'data.public."secondary index - list partitioning - DEFAULT"' to table collection
      [n1,client=127.0.0.1:44808,user=root] optimizer plan succeeded
      [n1,client=127.0.0.1:44808,user=root] planning ends
      [n1,client=127.0.0.1:44808,user=root] checking distributability
      [n1,client=127.0.0.1:44808,user=root] distributable plan: true
      [n1,client=127.0.0.1:44808,user=root] execution starts: distributed
       === SPAN START: consuming rows ===
      [n1,client=127.0.0.1:44808,user=root] creating DistSQL plan with isLocal=false
      [n1,client=127.0.0.1:44808,user=root] querying next range at /Table/104/2/4
      [n1,client=127.0.0.1:44808,user=root] running DistSQL plan
       === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
       === SPAN START: outbox ===
    cockroach.stat.outbox.bytes_sent: ߊ
    cockroach.streamid: 1
       === SPAN START: table reader ===
    cockroach.stat.tablereader.stalltime: 2µs
    cockroach.processorid: 0
    cockroach.stat.tablereader.input.rows: 0
      [n2] Scan /Table/104/2/{4-5}
       === SPAN START: txn coordinator send ===
       === SPAN START: [async] drain ===
       === SPAN START: dist sender send ===
      [txn=6b13153b,n2] querying next range at /Table/104/2/4
      [txn=6b13153b,n2] r245: sending batch 1 Scan to (n2,s2):2
      [txn=6b13153b,n2] sending request to local client
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n2] 1 Scan
      [n2,s2] executing 1 requests
      [n2,s2,r245/2:/Table/104/2/{4-5}] read-only path
      [n2,s2,r245/2:/Table/104/2/{4-5}] command queue
      [n2,s2,r245/2:/Table/104/2/{4-5}] waiting for read lock
      [n2,s2,r245/2:/Table/104/2/{4-5}] read completed
       === SPAN START: ordered aggregator ===
    cockroach.processorid: 1
    cockroach.stat.aggregator.input.rows: 0
    cockroach.stat.aggregator.mem.max: 10 KiB
    cockroach.stat.aggregator.stalltime: 367µs
      [n2] accumulation complete
      [n2] Consumer sent handshake. Consuming flow scheduled: false
       === SPAN START: flow ===
      [n1,client=127.0.0.1:44808,user=root] starting (1 processors, 0 startables)
       === SPAN START: noop ===
    cockroach.processorid: 2
      [n1,client=127.0.0.1:44808,user=root] execution ends
      [n1,client=127.0.0.1:44808,user=root] rows affected: 1
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:44808,user=root] AutoCommit. err: <nil>
      [n1,client=127.0.0.1:44808,user=root] releasing 1 tables
      [n1,client=127.0.0.1:44808,user=root] [NoTxn pos:7090] executing ExecStmt: SET TRACING = off
      [n1,client=127.0.0.1:44808,user=root] executing: SET TRACING = off in state: NoTxn
    goroutine 89502 [running]:
    runtime/debug.Stack(0xa7a358200, 0xc421cddbf0, 0x3db5620)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xb5
    github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e16d40, 0xc42378b770, 0xc421cddbc0)
        /go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
    github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestRepartitioning.func1(0xc42378b770)
        /go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1331 +0x1280
    testing.tRunner(0xc42378b770, 0xc423574ea0)
        /usr/local/go/src/testing/testing.go:777 +0x16e
    created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180820 17:21:40.683567 45788 sql/lease.go:345  [n1,client=127.0.0.1:44808,user=root] publish: descID=102 (secondary index - unpartitioned) version=5 mtime=2018-08-20 17:21:40.605558021 +0000 UTC
I180820 17:21:41.039356 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "drop_database", target: 101, info: {DatabaseName:data Statement:DROP DATABASE IF EXISTS data User:root DroppedSchemaObjects:[data.public."secondary index - unpartitioned"]}
I180820 17:21:41.293942 89482 storage/replica_raftstorage.go:538  [replicate,n2,s2,r237/2:/Table/102/2/{3-4}] generated preemptive snapshot cbbee5be at index 26
I180820 17:21:41.305921 89482 storage/store_snapshot.go:655  [replicate,n2,s2,r237/2:/Table/102/2/{3-4}] streamed snapshot to (n1,s1):?: kv pairs: 10, log entries: 16, rate-limit: 2.0 MiB/sec, 11ms
I180820 17:21:41.310247 89517 storage/replica_raftstorage.go:782  [n1,s1,r237/?:{-}] applying preemptive snapshot at index 26 (id=cbbee5be, encoded size=4620, 1 rocksdb batches, 16 log entries)
I180820 17:21:41.326000 89517 storage/replica_raftstorage.go:788  [n1,s1,r237/?:/Table/102/2/{3-4}] applied preemptive snapshot in 15ms [clear=0ms batch=0ms entries=4ms commit=10ms]
I180820 17:21:41.337905 89482 storage/replica_command.go:792  [replicate,n2,s2,r237/2:/Table/102/2/{3-4}] change replicas (ADD_REPLICA (n1,s1):3): read existing descriptor r237:/Table/102/2/{3-4} [(n2,s2):2, next=3, gen=842353071208]
I180820 17:21:41.471779 89484 storage/replica_command.go:792  [replicate,n1,s1,r238/2:/Table/102/2/{4-5}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r238:/Table/102/2/{4-5} [(n3,s3):1, (n1,s1):2, next=3, gen=842379517256]
I180820 17:21:41.546842 45788 sql/lease.go:345  [n1,client=127.0.0.1:44808,user=root,scExec=] publish: descID=102 (secondary index - unpartitioned) version=6 mtime=2018-08-20 17:21:41.544636341 +0000 UTC
I180820 17:21:41.711284 89482 storage/replica.go:3749  [n2,s2,r237/2:/Table/102/2/{3-4}] proposing ADD_REPLICA((n1,s1):3): updated=[(n2,s2):2 (n1,s1):3] next=4
I180820 17:21:41.802804 89484 storage/replica.go:3749  [n1,s1,r238/2:/Table/102/2/{4-5}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180820 17:21:41.849956 89533 storage/replica_range_lease.go:554  [replicate,n2,s2,r237/2:/Table/102/2/{3-4}] transferring lease to s1
I180820 17:21:41.872807 45868 storage/store.go:3717  [n3,s3,r238/1:/Table/102/2/{4-5}] added to replica GC queue (peer suggestion)
I180820 17:21:41.875742 89533 storage/replica_range_lease.go:617  [replicate,n2,s2,r237/2:/Table/102/2/{3-4}] done transferring lease to s1: <nil>
I180820 17:21:41.885009 44537 storage/replica_proposal.go:214  [n1,s1,r237/3:/Table/102/2/{3-4}] new range lease repl=(n1,s1):3 seq=11 start=1534785701.850132378,0 epo=1 pro=1534785701.850164426,0 following repl=(n2,s2):2 seq=10 start=1534785696.657908881,0 epo=1 pro=1534785696.657942863,0
I180820 17:21:42.030647 89608 storage/store.go:2612  [replicaGC,n3,s3,r238/1:/Table/102/2/{4-5}] removing replica
I180820 17:21:42.034164 89608 storage/replica.go:880  [replicaGC,n3,s3,r238/1:/Table/102/2/{4-5}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
I180820 17:21:42.183022 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "create_database", target: 103, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180820 17:21:42.236871 89626 storage/replica_command.go:282  [split,n3,s3,r240/1:/{Table/102/3-Max}] initiating a split of this range at key /Table/103 [r241]
I180820 17:21:42.440709 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "create_table", target: 104, info: {TableName:data.public."secondary index - list partitioning" Statement:CREATE TABLE "secondary index - list partitioning" (a INT PRIMARY KEY, b INT, INDEX b_idx (b) PARTITION BY LIST (b) (PARTITION p3 VALUES IN (3), PARTITION p4 VALUES IN (4))) User:root}
I180820 17:21:42.736868 45635 storage/replica_proposal.go:214  [n3,s3,r240/1:/{Table/102/3-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:42.768898 89675 storage/replica_command.go:282  [split,n3,s3,r241/1:/{Table/103-Max}] initiating a split of this range at key /Table/104 [r242]
I180820 17:21:42.929492 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "set_zone_config", target: 104, info: {Target:data."secondary index - list partitioning"@b_idx Config:constraints: [+n1] User:root}
I180820 17:21:43.098543 45650 storage/replica_proposal.go:214  [n3,s3,r241/1:/{Table/103-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:43.163469 89704 storage/replica_command.go:282  [split,n3,s3,r242/1:/{Table/104-Max}] initiating a split of this range at key /Table/104/2 [r243]
I180820 17:21:43.303747 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "set_zone_config", target: 104, info: {Target:data."secondary index - list partitioning".p3 Config:constraints: [+n2] User:root}
I180820 17:21:43.496444 45595 storage/replica_proposal.go:214  [n3,s3,r242/1:/{Table/104-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:43.517569 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "set_zone_config", target: 104, info: {Target:data."secondary index - list partitioning".p4 Config:constraints: [+n3] User:root}
I180820 17:21:43.523612 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b = 4
I180820 17:21:43.580985 89713 storage/replica_command.go:282  [split,n3,s3,r243/1:/{Table/104/2-Max}] initiating a split of this range at key /Table/104/2/3 [r244]
I180820 17:21:43.890250 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b < 3
I180820 17:21:43.918493 45269 server/status/runtime.go:433  [n2] runtime stats: 1.1 GiB RSS, 637 goroutines, 40 MiB/31 MiB/98 MiB GO alloc/idle/total, 134 MiB/182 MiB CGO alloc/total, 2510.27cgo/sec, 1.64/0.08 %(u/s)time, 0.02 %gc (10x)
I180820 17:21:43.934447 45627 storage/replica_proposal.go:214  [n3,s3,r243/1:/{Table/104/2-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:43.962110 89814 storage/replica_command.go:282  [split,n3,s3,r244/1:/{Table/104/2/3-Max}] initiating a split of this range at key /Table/104/2/4 [r245]
I180820 17:21:43.981170 89850 storage/replica_raftstorage.go:538  [replicate,n3,s3,r243/1:/Table/104/2{-/3}] generated preemptive snapshot 3de3d1e6 at index 15
I180820 17:21:44.001188 89850 storage/store_snapshot.go:655  [replicate,n3,s3,r243/1:/Table/104/2{-/3}] streamed snapshot to (n1,s1):?: kv pairs: 8, log entries: 5, rate-limit: 2.0 MiB/sec, 19ms
I180820 17:21:44.005568 89818 storage/replica_raftstorage.go:782  [n1,s1,r243/?:{-}] applying preemptive snapshot at index 15 (id=3de3d1e6, encoded size=2255, 1 rocksdb batches, 5 log entries)
I180820 17:21:44.014233 89818 storage/replica_raftstorage.go:788  [n1,s1,r243/?:/Table/104/2{-/3}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=7ms commit=0ms]
I180820 17:21:44.048504 89850 storage/replica_command.go:792  [replicate,n3,s3,r243/1:/Table/104/2{-/3}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r243:/Table/104/2{-/3} [(n3,s3):1, next=2, gen=842417035624]
I180820 17:21:44.070553 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b < 3
I180820 17:21:44.131838 89907 storage/replica_command.go:792  [replicate,n1,s1,r237/3:/Table/102/2/{3-4}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r237:/Table/102/2/{3-4} [(n2,s2):2, (n1,s1):3, next=4, gen=842354106504]
I180820 17:21:44.148712 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b = 4
I180820 17:21:44.250444 89907 storage/replica.go:3749  [n1,s1,r237/3:/Table/102/2/{3-4}] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):3] next=4
I180820 17:21:44.273652 45311 storage/store.go:3717  [n2,s2,r237/2:/Table/102/2/{3-4}] added to replica GC queue (peer suggestion)
I180820 17:21:44.294841 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b > 4
I180820 17:21:44.309388 89850 storage/replica.go:3749  [n3,s3,r243/1:/Table/104/2{-/3}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180820 17:21:44.314062 45271 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:21:44.315832 45626 storage/replica_proposal.go:214  [n3,s3,r244/1:/{Table/104/2/3-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:44.380679 89916 storage/replica_command.go:282  [split,n3,s3,r245/1:/{Table/104/2/4-Max}] initiating a split of this range at key /Table/104/2/5 [r246]
I180820 17:21:44.383478 45311 storage/store.go:3717  [n2,s2,r237/2:/Table/102/2/{3-4}] added to replica GC queue (peer suggestion)
I180820 17:21:44.514424 89971 storage/store.go:2612  [replicaGC,n2,s2,r237/2:/Table/102/2/{3-4}] removing replica
I180820 17:21:44.518510 89971 storage/replica.go:880  [replicaGC,n2,s2,r237/2:/Table/102/2/{3-4}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
I180820 17:21:44.581669 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b < 3
I180820 17:21:44.617658 89965 storage/replica_range_lease.go:554  [replicate,n3,s3,r243/1:/Table/104/2{-/3}] transferring lease to s1
I180820 17:21:44.633551 89965 storage/replica_range_lease.go:617  [replicate,n3,s3,r243/1:/Table/104/2{-/3}] done transferring lease to s1: <nil>
I180820 17:21:44.647821 44573 storage/replica_proposal.go:214  [n1,s1,r243/2:/Table/104/2{-/3}] new range lease repl=(n1,s1):2 seq=10 start=1534785704.617830692,0 epo=1 pro=1534785704.617869918,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:44.665605 89993 storage/replica_raftstorage.go:538  [replicate,n3,s3,r244/1:/Table/104/2/{3-4}] generated preemptive snapshot 7feb4486 at index 16
I180820 17:21:44.683634 89993 storage/store_snapshot.go:655  [replicate,n3,s3,r244/1:/Table/104/2/{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 7, log entries: 6, rate-limit: 2.0 MiB/sec, 6ms
I180820 17:21:44.687519 90038 storage/replica_raftstorage.go:782  [n2,s2,r244/?:{-}] applying preemptive snapshot at index 16 (id=7feb4486, encoded size=1993, 1 rocksdb batches, 6 log entries)
I180820 17:21:44.690182 90038 storage/replica_raftstorage.go:788  [n2,s2,r244/?:/Table/104/2/{3-4}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 17:21:44.696469 89993 storage/replica_command.go:792  [replicate,n3,s3,r244/1:/Table/104/2/{3-4}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r244:/Table/104/2/{3-4} [(n3,s3):1, next=2, gen=842400064800]
I180820 17:21:44.885935 45596 storage/replica_proposal.go:214  [n3,s3,r245/1:/{Table/104/2/4-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:44.903614 90045 storage/replica_command.go:282  [split,n3,s3,r246/1:/{Table/104/2/5-Max}] initiating a split of this range at key /Table/104/3 [r247]
I180820 17:21:44.974549 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b < 3
I180820 17:21:45.138824 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b = 3
I180820 17:21:45.157898 89993 storage/replica.go:3749  [n3,s3,r244/1:/Table/104/2/{3-4}] proposing ADD_REPLICA((n2,s2):2): updated=[(n3,s3):1 (n2,s2):2] next=3
I180820 17:21:45.206752 45546 server/status/runtime.go:433  [n3] runtime stats: 1.1 GiB RSS, 642 goroutines, 54 MiB/19 MiB/98 MiB GO alloc/idle/total, 134 MiB/182 MiB CGO alloc/total, 2600.25cgo/sec, 1.64/0.08 %(u/s)time, 0.02 %gc (9x)
I180820 17:21:45.324071 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b > 4
I180820 17:21:45.334719 45597 storage/replica_proposal.go:214  [n3,s3,r246/1:/{Table/104/2/5-Max}] new range lease repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:45.368805 90074 storage/replica_range_lease.go:554  [replicate,n3,s3,r244/1:/Table/104/2/{3-4}] transferring lease to s2
I180820 17:21:45.436421 90074 storage/replica_range_lease.go:617  [replicate,n3,s3,r244/1:/Table/104/2/{3-4}] done transferring lease to s2: <nil>
I180820 17:21:45.440062 45148 storage/replica_proposal.go:214  [n2,s2,r244/2:/Table/104/2/{3-4}] new range lease repl=(n2,s2):2 seq=10 start=1534785705.368984946,0 epo=1 pro=1534785705.369011345,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:45.498451 90079 storage/replica_raftstorage.go:538  [replicate,n3,s3,r246/1:/Table/104/{2/5-3}] generated preemptive snapshot 586bfc4a at index 15
I180820 17:21:45.527081 45548 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180820 17:21:45.554997 90079 storage/store_snapshot.go:655  [replicate,n3,s3,r246/1:/Table/104/{2/5-3}] streamed snapshot to (n1,s1):?: kv pairs: 8, log entries: 5, rate-limit: 2.0 MiB/sec, 14ms
I180820 17:21:45.559507 90195 storage/replica_raftstorage.go:782  [n1,s1,r246/?:{-}] applying preemptive snapshot at index 15 (id=586bfc4a, encoded size=2262, 1 rocksdb batches, 5 log entries)
I180820 17:21:45.562054 90195 storage/replica_raftstorage.go:788  [n1,s1,r246/?:/Table/104/{2/5-3}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180820 17:21:45.565659 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b < 3
I180820 17:21:45.581826 90079 storage/replica_command.go:792  [replicate,n3,s3,r246/1:/Table/104/{2/5-3}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r246:/Table/104/{2/5-3} [(n3,s3):1, next=2, gen=842413325592]
I180820 17:21:45.650376 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b = 3
I180820 17:21:45.751149 90079 storage/replica.go:3749  [n3,s3,r246/1:/Table/104/{2/5-3}] proposing ADD_REPLICA((n1,s1):2): updated=[(n3,s3):1 (n1,s1):2] next=3
I180820 17:21:45.776127 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b = 4
I180820 17:21:45.875159 89502 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "secondary index - list partitioning" WHERE b > 4
I180820 17:21:45.898836 90263 storage/replica_range_lease.go:554  [replicate,n3,s3,r246/1:/Table/104/{2/5-3}] transferring lease to s1
I180820 17:21:45.935371 90259 storage/replica_command.go:792  [replicate,n2,s2,r244/2:/Table/104/2/{3-4}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r244:/Table/104/2/{3-4} [(n3,s3):1, (n2,s2):2, next=3, gen=842355436360]
I180820 17:21:45.943520 90263 storage/replica_range_lease.go:617  [replicate,n3,s3,r246/1:/Table/104/{2/5-3}] done transferring lease to s1: <nil>
I180820 17:21:45.944507 44554 storage/replica_proposal.go:214  [n1,s1,r246/2:/Table/104/{2/5-3}] new range lease repl=(n1,s1):2 seq=10 start=1534785705.899231405,0 epo=1 pro=1534785705.899270936,0 following repl=(n3,s3):1 seq=9 start=1534785557.142597904,0 epo=1 pro=1534785557.142627907,0
I180820 17:21:46.128936 90259 storage/replica.go:3749  [n2,s2,r244/2:/Table/104/2/{3-4}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n2,s2):2] next=3
I180820 17:21:46.152711 45788 sql/lease.go:345  [n1,client=127.0.0.1:44808,user=root] publish: descID=104 (secondary index - list partitioning - DEFAULT) version=2 mtime=2018-08-20 17:21:46.139651616 +0000 UTC
I180820 17:21:46.181060 50820 storage/store.go:3717  [n3,s3,r244/1:/Table/104/2/{3-4}] added to replica GC queue (peer suggestion)
I180820 17:21:46.190397 90311 storage/store.go:2612  [replicaGC,n3,s3,r244/1:/Table/104/2/{3-4}] removing replica
I180820 17:21:46.194151 90311 storage/replica.go:880  [replicaGC,n3,s3,r244/1:/Table/104/2/{3-4}] removed 8 (0+8) keys in 1ms [clear=1ms commit=0ms]
I180820 17:21:46.292653 90098 storage/replica_command.go:792  [replicate,n1,s1,r243/2:/Table/104/2{-/3}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r243:/Table/104/2{-/3} [(n3,s3):1, (n1,s1):2, next=3, gen=842371061728]
I180820 17:21:46.597899 90098 storage/replica.go:3749  [n1,s1,r243/2:/Table/104/2{-/3}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180820 17:21:46.696722 90296 sql/lease.go:877  new lease: 104("secondary index - list partitioning - DEFAULT") ver=2:1534786075.407683538,0, refcount=0
I180820 17:21:46.714115 45868 storage/store.go:3717  [n3,s3,r243/1:/Table/104/2{-/3}] added to replica GC queue (peer suggestion)
I180820 17:21:46.725719 45788 sql/lease.go:315  publish (1 leases): desc=[{secondary index - list partitioning - DEFAULT 104 1}]
I180820 17:21:46.728380 90301 storage/store.go:2612  [replicaGC,n3,s3,r243/1:/Table/104/2{-/3}] removing replica
I180820 17:21:46.731854 90301 storage/replica.go:880  [replicaGC,n3,s3,r243/1:/Table/104/2{-/3}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180820 17:21:46.954155 45788 sql/lease.go:315  publish (1 leases): desc=[{secondary index - list partitioning - DEFAULT 104 1}]
I180820 17:21:47.225091 45788 sql/lease.go:345  [n1,client=127.0.0.1:44808,user=root,scExec=] publish: descID=104 (secondary index - list partitioning - DEFAULT) version=3 mtime=2018-08-20 17:21:47.223360226 +0000 UTC
I180820 17:21:47.313951 90394 storage/replica_command.go:792  [replicate,n1,s1,r246/2:/Table/104/{2/5-3}] change replicas (REMOVE_REPLICA (n3,s3):1): read existing descriptor r246:/Table/104/{2/5-3} [(n3,s3):1, (n1,s1):2, next=3, gen=842427275856]
I180820 17:21:47.419630 90395 sql/lease.go:877  new lease: 104("secondary index - list partitioning - DEFAULT") ver=3:1534785993.474064670,0, refcount=0
I180820 17:21:47.463118 90394 storage/replica.go:3749  [n1,s1,r246/2:/Table/104/{2/5-3}] proposing REMOVE_REPLICA((n3,s3):1): updated=[(n1,s1):2] next=3
I180820 17:21:47.599971 45868 storage/store.go:3717  [n3,s3,r246/1:/Table/104/{2/5-3}] added to replica GC queue (peer suggestion)
I180820 17:21:47.622694 45788 sql/lease.go:315  publish (1 leases): desc=[{secondary index - list partitioning - DEFAULT 104 2}]
I180820 17:21:47.627316 90456 storage/store.go:2612  [replicaGC,n3,s3,r246/1:/Table/104/{2/5-3}] removing replica
I180820 17:21:47.645349 90456 storage/replica.go:880  [replicaGC,n3,s3,r246/1:/Table/104/{2/5-3}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
I180820 17:21:47.926577 45788 sql/lease.go:345  [n1,client=127.0.0.1:44808,user=root] publish: descID=104 (secondary index - list partitioning - DEFAULT) version=4 mtime=2018-08-20 17:21:47.814074387 +0000 UTC
I180820 17:21:48.231850 45788 sql/event_log.go:126  [n1,client=127.0.0.1:44808,user=root] Event: "alter_index", target: 104, info: {TableName:data.public."secondary index - list partitioning - DEFAULT" IndexName:b_idx Statement:ALTER INDEX "secondary index - list partitioning - DEFAULT"@b_idx PARTITION BY LIST (b) (PARTITION p4 VALUES IN ((4)), PARTITION p5 VALUES IN ((5)), PARTITION pd VALUES IN ((DEFAULT))) User:root MutationID:0}
I180820 17:21:48.435511 90369 sql/lease.go:877  new lease: 104("secondary index - list partitioning - DEFAULT") ver=4:1534786013.644813507,0, refcount=0
I180820 17:21:48.485382 45788 sql/lease.go:315  publish (1 leases): desc=[{secondary index - list partitioning - DEFAULT 104 3}]
I180820 17:21:49.215251 90550 storage/replica_raftstorage.go:538  [replicate,n2,s2,r244/2:/Table/104/2/{3-4}] generated preemptive snapshot 5c7e6865 at index 27
I180820 17:21:49.233459 90550 storage/store_snapshot.go:655  [replicate,n2,s2,r244/2:/Table/104/2/{3-4}] streamed snapshot to (n1,s1):?: kv pairs: 11, log entries: 17, rate-limit: 2.0 MiB/sec, 11ms
I180820 17:21:49.240050 90480 storage/replica_raftstorage.go:782  [n1,s1,r244/?:{-}] applying preemptive snapshot at index 27 (id=5c7e6865, encoded size=4957, 1 rocksdb batches, 17 log entries)
I180820 17:21:49.248606 90480 storage/replica_raftstorage.go:788  [n1,s1,r244/?:/Table/104

Please assign an owner, take a look, and update the issue accordingly.

a-robinson commented 6 years ago

Dupe of https://github.com/cockroachdb/cockroach/issues/28786