cockroachdb / cockroach


teamcity: failed tests on master: testrace/TestRepartitioning, test/TestRepartitioning #28780

Closed cockroach-teamcity closed 6 years ago

cockroach-teamcity commented 6 years ago

The following tests appear to have failed:

#845270:

--- FAIL: testrace/TestRepartitioning (386.400s)

------- Stdout: -------
W180817 22:55:00.198962 42207 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180817 22:55:00.338190 42207 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180817 22:55:00.338946 42207 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180817 22:55:00.339119 42207 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180817 22:55:00.379432 42207 server/config.go:496  [n?] 1 storage engine initialized
I180817 22:55:00.380998 42207 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180817 22:55:00.381116 42207 server/config.go:499  [n?] store 0: in-memory, size 0 B
I180817 22:55:00.506144 42207 util/stop/stopper.go:537  [n?] quiescing; tasks left:
1      [async] storage.pendingLeaseRequest: requesting lease
I180817 22:55:00.518524 42207 server/node.go:373  [n?] **** cluster 19cdccbb-629c-4f62-a24d-3c300d09ad1f has been created
I180817 22:55:00.518703 42207 server/server.go:1400  [n?] **** add additional nodes by specifying --join=127.0.0.1:33715
I180817 22:55:00.519982 42207 gossip/gossip.go:383  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33715" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1740-g9f5ea5b" started_at:1534546500519538998 
I180817 22:55:00.584560 42207 storage/store.go:1509  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180817 22:55:00.586228 42207 server/node.go:476  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=6.9 KiB), ranges=1, leases=1, queries=0.00, writes=0.00, bytesPerReplica={p10=7103.00 p25=7103.00 p50=7103.00 p75=7103.00 p90=7103.00 pMax=7103.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I180817 22:55:00.586893 42207 storage/stores.go:242  [n1] read 0 node addresses from persistent storage
I180817 22:55:00.587613 42207 server/node.go:697  [n1] connecting to gossip network to verify cluster ID...
I180817 22:55:00.592237 42207 server/node.go:722  [n1] node connected via gossip and verified as part of cluster "19cdccbb-629c-4f62-a24d-3c300d09ad1f"
I180817 22:55:00.596489 42207 server/node.go:546  [n1] node=1: started with [n1=<in-mem>] engine(s) and attributes []
I180817 22:55:00.609489 42365 storage/replica_command.go:282  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I180817 22:55:00.615277 42207 server/status/recorder.go:652  [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:55:00.616156 42207 server/server.go:1806  [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180817 22:55:00.624879 42207 server/server.go:1537  [n1] starting https server at 127.0.0.1:42595 (use: 127.0.0.1:42595)
I180817 22:55:00.625085 42207 server/server.go:1539  [n1] starting grpc/postgres server at 127.0.0.1:33715
I180817 22:55:00.625246 42207 server/server.go:1540  [n1] advertising CockroachDB node at 127.0.0.1:33715
W180817 22:55:00.625645 42207 jobs/registry.go:308  [n1] unable to get node liveness: node not in the liveness table
I180817 22:55:00.630490 42338 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:55:00.857937 42633 storage/replica_command.go:282  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
W180817 22:55:00.903418 42581 storage/intent_resolver.go:668  [n1,s1] failed to push during intent resolution: failed to push "sql txn" id=90137e12 key=/Table/SystemConfigSpan/Start rw=true pri=0.00642994 iso=SERIALIZABLE stat=PENDING epo=0 ts=1534546500.699972957,0 orig=1534546500.699972957,0 max=1534546500.699972957,0 wto=false rop=false seq=6
I180817 22:55:01.044311 42612 rpc/nodedialer/nodedialer.go:92  [consistencyChecker,n1,s1,r1/1:/{Min-System/}] connection to n1 established
I180817 22:55:01.078178 42616 sql/event_log.go:126  [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
I180817 22:55:01.423020 42306 storage/replica_consistency.go:129  [consistencyChecker,n1,s1,r2/1:/{System/-Max}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1534546501086513897 IntentAge:0 GCBytesAge:0 LiveBytes:-22043 LiveCount:-461 KeyBytes:-21468 KeyCount:-461 ValBytes:-575 ValCount:-461 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I180817 22:55:01.428703 42661 storage/replica_command.go:282  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
I180817 22:55:01.585140 42663 storage/replica_command.go:282  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I180817 22:55:01.830319 42642 storage/replica_command.go:282  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I180817 22:55:02.081985 42675 sql/event_log.go:126  [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:$1 User:root}
I180817 22:55:02.137257 42668 storage/replica_command.go:282  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I180817 22:55:02.436157 42697 storage/replica_command.go:282  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
I180817 22:55:02.474810 42729 sql/event_log.go:126  [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
I180817 22:55:02.632994 42755 storage/replica_command.go:282  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
I180817 22:55:02.714039 42622 storage/replica_command.go:282  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I180817 22:55:02.811345 42805 storage/replica_command.go:282  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I180817 22:55:02.988160 42767 storage/replica_command.go:282  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I180817 22:55:03.015463 42763 sql/event_log.go:126  [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:gen_random_uuid()::STRING User:root}
I180817 22:55:03.140285 42776 storage/replica_command.go:282  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I180817 22:55:03.220255 42750 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
I180817 22:55:03.333827 42779 sql/event_log.go:126  [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
I180817 22:55:03.359967 42793 storage/replica_command.go:282  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I180817 22:55:03.434402 42207 server/server.go:1593  [n1] done ensuring all necessary migrations have run
I180817 22:55:03.434703 42207 server/server.go:1596  [n1] serving sql connections
I180817 22:55:03.488454 42885 server/server_update.go:67  [n1] no need to upgrade, cluster already at the newest version
I180817 22:55:03.559597 42887 sql/event_log.go:126  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:33715} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1740-g9f5ea5b StartedAt:1534546500519538998 LocalityAddress:[]} ClusterID:19cdccbb-629c-4f62-a24d-3c300d09ad1f StartedAt:1534546500519538998 LastUp:1534546500519538998}
I180817 22:55:03.603634 42891 storage/replica_command.go:282  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I180817 22:55:03.732190 42881 storage/replica_command.go:282  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
I180817 22:55:03.878602 42948 storage/replica_command.go:282  [split,n1,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
I180817 22:55:04.019488 42829 storage/replica_command.go:282  [split,n1,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
I180817 22:55:04.103040 42937 storage/replica_command.go:282  [split,n1,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
I180817 22:55:04.270637 42980 storage/replica_command.go:282  [split,n1,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
I180817 22:55:04.487233 42991 storage/replica_command.go:282  [split,n1,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
I180817 22:55:04.715980 43001 storage/replica_command.go:282  [split,n1,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
W180817 22:55:04.990987 42207 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180817 22:55:05.194803 42207 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180817 22:55:05.195490 42207 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180817 22:55:05.195686 42207 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180817 22:55:05.201348 42207 server/config.go:496  [n?] 1 storage engine initialized
I180817 22:55:05.201508 42207 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180817 22:55:05.201564 42207 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180817 22:55:05.201948 42207 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180817 22:55:05.202634 42207 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180817 22:55:05.412398 43109 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:33715
I180817 22:55:05.415127 43009 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:41623}
I180817 22:55:05.423071 42207 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180817 22:55:05.423391 42207 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "19cdccbb-629c-4f62-a24d-3c300d09ad1f"
I180817 22:55:05.429271 42971 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180817 22:55:05.436229 42970 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180817 22:55:05.443811 42207 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180817 22:55:05.453860 42207 server/node.go:428  [n?] new node allocated ID 2
I180817 22:55:05.455504 42207 gossip/gossip.go:383  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:41623" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1740-g9f5ea5b" started_at:1534546505454132496 
I180817 22:55:05.462440 43126 storage/stores.go:261  [n1] wrote 1 node addresses to persistent storage
I180817 22:55:05.463378 42207 storage/stores.go:242  [n2] read 0 node addresses from persistent storage
I180817 22:55:05.464558 42207 storage/stores.go:261  [n2] wrote 1 node addresses to persistent storage
I180817 22:55:05.525019 42207 server/node.go:672  [n2] bootstrapped store [n2,s2]
I180817 22:55:05.538480 42207 server/node.go:546  [n2] node=2: started with [] engine(s) and attributes []
I180817 22:55:05.542430 42207 server/status/recorder.go:652  [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:55:05.542617 42207 server/server.go:1806  [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180817 22:55:05.565719 42207 server/server.go:1537  [n2] starting https server at 127.0.0.1:38155 (use: 127.0.0.1:38155)
I180817 22:55:05.565902 42207 server/server.go:1539  [n2] starting grpc/postgres server at 127.0.0.1:41623
I180817 22:55:05.565980 42207 server/server.go:1540  [n2] advertising CockroachDB node at 127.0.0.1:41623
I180817 22:55:05.591092 42207 server/server.go:1593  [n2] done ensuring all necessary migrations have run
I180817 22:55:05.591249 42207 server/server.go:1596  [n2] serving sql connections
I180817 22:55:05.593638 43268 storage/replica_raftstorage.go:529  [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot 558b0bcf at index 18
I180817 22:55:05.602695 43234 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:55:05.730487 43146 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
W180817 22:55:05.901429 42207 server/status/runtime.go:294  [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I180817 22:55:05.926845 43286 sql/event_log.go:126  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:41623} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1740-g9f5ea5b StartedAt:1534546505454132496 LocalityAddress:[]} ClusterID:19cdccbb-629c-4f62-a24d-3c300d09ad1f StartedAt:1534546505454132496 LastUp:1534546505454132496}
I180817 22:55:05.928448 43284 server/server_update.go:67  [n2] no need to upgrade, cluster already at the newest version
I180817 22:55:05.945714 43268 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r15/1:/Table/1{8-9}] connection to n2 established
I180817 22:55:05.946962 43138 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180817 22:55:05.954396 43268 storage/store_snapshot.go:655  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 8ms
I180817 22:55:05.965536 43322 storage/replica_raftstorage.go:773  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 18 (id=558b0bcf, encoded size=2418, 1 rocksdb batches, 8 log entries)
I180817 22:55:05.969421 43322 storage/replica_raftstorage.go:779  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=1ms]
I180817 22:55:05.982731 43268 storage/replica_command.go:792  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2, gen=842378238424]
I180817 22:55:06.052985 42207 server/server.go:829  [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I180817 22:55:06.063404 43268 storage/replica.go:3752  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180817 22:55:06.078056 42207 base/addr_validation.go:260  [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180817 22:55:06.080305 42207 base/addr_validation.go:300  [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
I180817 22:55:06.109869 42207 server/config.go:496  [n?] 1 storage engine initialized
I180817 22:55:06.111967 42207 server/config.go:499  [n?] RocksDB cache size: 128 MiB
I180817 22:55:06.112154 42207 server/config.go:499  [n?] store 0: in-memory, size 0 B
W180817 22:55:06.112531 42207 gossip/gossip.go:1351  [n?] no incoming or outgoing connections
I180817 22:55:06.113754 42207 server/server.go:1402  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I180817 22:55:06.154437 43337 storage/replica_range_lease.go:554  [replicate,n1,s1,r15/1:/Table/1{8-9}] transferring lease to s2
I180817 22:55:06.176286 43337 storage/replica_range_lease.go:617  [replicate,n1,s1,r15/1:/Table/1{8-9}] done transferring lease to s2: <nil>
I180817 22:55:06.177775 43157 storage/replica_proposal.go:214  [n2,s2,r15/2:/Table/1{8-9}] new range lease repl=(n2,s2):2 seq=3 start=1534546506.157087507,0 epo=1 pro=1534546506.157104055,0 following repl=(n1,s1):1 seq=2 start=1534546500.545227719,0 exp=1534546509.570492849,0 pro=1534546500.570597902,0
I180817 22:55:06.190020 43339 storage/replica_raftstorage.go:529  [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 2c156626 at index 44
I180817 22:55:06.197044 43339 storage/store_snapshot.go:655  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 139, log entries: 34, rate-limit: 2.0 MiB/sec, 6ms
I180817 22:55:06.200228 43282 storage/replica_raftstorage.go:773  [n2,s2,r10/?:{-}] applying preemptive snapshot at index 44 (id=2c156626, encoded size=37108, 1 rocksdb batches, 34 log entries)
I180817 22:55:06.237392 43282 storage/replica_raftstorage.go:779  [n2,s2,r10/?:/Table/1{3-4}] applied preemptive snapshot in 34ms [clear=0ms batch=0ms entries=25ms commit=8ms]
I180817 22:55:06.247773 43339 storage/replica_command.go:792  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2, gen=842424697968]
I180817 22:55:06.271433 43443 storage/replica_command.go:792  [replicate,n2,s2,r15/2:/Table/1{8-9}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, (n2,s2):2, next=3, gen=842387542576]
I180817 22:55:06.292141 43339 storage/replica.go:3752  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180817 22:55:06.370962 43460 storage/replica_range_lease.go:554  [replicate,n1,s1,r10/1:/Table/1{3-4}] transferring lease to s2
I180817 22:55:06.400422 43460 storage/replica_range_lease.go:617  [replicate,n1,s1,r10/1:/Table/1{3-4}] done transferring lease to s2: <nil>
I180817 22:55:06.405948 43326 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:33715
I180817 22:55:06.409068 43455 gossip/server.go:219  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:34981}
I180817 22:55:06.412762 43475 storage/replica_raftstorage.go:529  [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot 05981a9c at index 24
I180817 22:55:06.423681 42207 server/node.go:697  [n?] connecting to gossip network to verify cluster ID...
I180817 22:55:06.429186 42207 server/node.go:722  [n?] node connected via gossip and verified as part of cluster "19cdccbb-629c-4f62-a24d-3c300d09ad1f"
I180817 22:55:06.434411 43510 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180817 22:55:06.440220 43509 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180817 22:55:06.445728 43171 storage/replica_proposal.go:214  [n2,s2,r10/2:/Table/1{3-4}] new range lease repl=(n2,s2):2 seq=3 start=1534546506.371125787,0 epo=1 pro=1534546506.371140906,0 following repl=(n1,s1):1 seq=2 start=1534546500.545227719,0 exp=1534546509.570492849,0 pro=1534546500.570597902,0
I180817 22:55:06.447954 42207 kv/dist_sender.go:345  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I180817 22:55:06.457428 42207 server/node.go:428  [n?] new node allocated ID 3
I180817 22:55:06.458194 42207 gossip/gossip.go:383  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:34981" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:0 patch:0 unstable:11 > build_tag:"v2.1.0-alpha.20180702-1740-g9f5ea5b" started_at:1534546506457735851 
I180817 22:55:06.459267 42207 storage/stores.go:242  [n3] read 0 node addresses from persistent storage
I180817 22:55:06.462436 43523 storage/stores.go:261  [n1] wrote 2 node addresses to persistent storage
I180817 22:55:06.475024 42207 storage/stores.go:261  [n3] wrote 2 node addresses to persistent storage
I180817 22:55:06.489532 43356 storage/stores.go:261  [n2] wrote 2 node addresses to persistent storage
I180817 22:55:06.518120 43475 storage/store_snapshot.go:655  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):?: kv pairs: 931, log entries: 14, rate-limit: 2.0 MiB/sec, 104ms
I180817 22:55:06.532362 43458 storage/replica_raftstorage.go:773  [n2,s2,r5/?:{-}] applying preemptive snapshot at index 24 (id=05981a9c, encoded size=153190, 1 rocksdb batches, 14 log entries)
I180817 22:55:06.584089 43458 storage/replica_raftstorage.go:779  [n2,s2,r5/?:/System/ts{d-e}] applied preemptive snapshot in 51ms [clear=0ms batch=1ms entries=48ms commit=1ms]
I180817 22:55:06.602444 43475 storage/replica_command.go:792  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2, gen=842395428336]
I180817 22:55:06.624504 43443 storage/replica.go:3752  [n2,s2,r15/2:/Table/1{8-9}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180817 22:55:06.669057 42207 server/node.go:672  [n3] bootstrapped store [n3,s3]
I180817 22:55:06.694872 42207 server/node.go:546  [n3] node=3: started with [] engine(s) and attributes []
I180817 22:55:06.696422 42207 server/status/recorder.go:652  [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:55:06.696595 42207 server/server.go:1806  [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
I180817 22:55:06.699044 42207 server/server.go:1537  [n3] starting https server at 127.0.0.1:36077 (use: 127.0.0.1:36077)
I180817 22:55:06.699199 42207 server/server.go:1539  [n3] starting grpc/postgres server at 127.0.0.1:34981
I180817 22:55:06.699261 42207 server/server.go:1540  [n3] advertising CockroachDB node at 127.0.0.1:34981
W180817 22:55:06.699555 42207 jobs/registry.go:308  [n3] unable to get node liveness: node not in the liveness table
I180817 22:55:06.705449 43642 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:55:06.706453 43353 storage/store.go:3663  [n1,s1,r15/1:/Table/1{8-9}] added to replica GC queue (peer suggestion)
I180817 22:55:06.709560 43408 storage/replica_command.go:792  [replicate,n2,s2,r10/2:/Table/1{3-4}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, (n2,s2):2, next=3, gen=842395280920]
I180817 22:55:06.735635 42207 server/server.go:1593  [n3] done ensuring all necessary migrations have run
I180817 22:55:06.735861 42207 server/server.go:1596  [n3] serving sql connections
I180817 22:55:06.740028 43608 storage/store.go:2563  [replicaGC,n2,s2,r5/?:/System/ts{d-e}] removing replica
I180817 22:55:06.746473 43589 storage/store.go:2563  [replicaGC,n1,s1,r15/1:/Table/1{8-9}] removing replica
I180817 22:55:06.748688 43589 storage/replica.go:883  [replicaGC,n1,s1,r15/1:/Table/1{8-9}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180817 22:55:06.748705 43608 storage/replica.go:883  [replicaGC,n2,s2,r5/?:/System/ts{d-e}] removed 468 (461+7) keys in 5ms [clear=0ms commit=5ms]
I180817 22:55:06.831940 43475 storage/replica.go:3752  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I180817 22:55:06.848349 43678 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n1 established
I180817 22:55:06.997765 43710 storage/replica_raftstorage.go:529  [raftsnapshot,n1,s1,r5/1:/System/ts{d-e}] generated Raft snapshot 88828dae at index 26
I180817 22:55:07.156231 43710 storage/store_snapshot.go:655  [raftsnapshot,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n2,s2):2: kv pairs: 933, log entries: 16, rate-limit: 8.0 MiB/sec, 141ms
I180817 22:55:07.160356 43785 storage/replica_raftstorage.go:773  [n2,s2,r5/2:{-}] applying Raft snapshot at index 26 (id=88828dae, encoded size=154658, 1 rocksdb batches, 16 log entries)
I180817 22:55:07.177143 43408 storage/replica.go:3752  [n2,s2,r10/2:/Table/1{3-4}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180817 22:55:07.180918 43670 sql/event_log.go:126  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:34981} Attrs: Locality: ServerVersion:2.0-11 BuildTag:v2.1.0-alpha.20180702-1740-g9f5ea5b StartedAt:1534546506457735851 LocalityAddress:[]} ClusterID:19cdccbb-629c-4f62-a24d-3c300d09ad1f StartedAt:1534546506457735851 LastUp:1534546506457735851}
I180817 22:55:07.219639 43353 storage/store.go:3663  [n1,s1,r10/1:/Table/1{3-4}] added to replica GC queue (peer suggestion)
I180817 22:55:07.225750 43668 server/server_update.go:67  [n3] no need to upgrade, cluster already at the newest version
I180817 22:55:07.226575 43569 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180817 22:55:07.249613 43790 storage/store.go:2563  [replicaGC,n1,s1,r10/1:/Table/1{3-4}] removing replica
I180817 22:55:07.255786 43679 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n2 established
I180817 22:55:07.265763 43790 storage/replica.go:883  [replicaGC,n1,s1,r10/1:/Table/1{3-4}] removed 158 (151+7) keys in 1ms [clear=1ms commit=0ms]
I180817 22:55:07.296543 43785 storage/replica_raftstorage.go:779  [n2,s2,r5/2:/System/ts{d-e}] applied Raft snapshot in 136ms [clear=1ms batch=1ms entries=124ms commit=10ms]
I180817 22:55:07.321126 43867 storage/replica_range_lease.go:554  [replicate,n1,s1,r5/1:/System/ts{d-e}] transferring lease to s2
I180817 22:55:07.394917 43867 storage/replica_range_lease.go:617  [replicate,n1,s1,r5/1:/System/ts{d-e}] done transferring lease to s2: <nil>
I180817 22:55:07.397553 43875 storage/replica_raftstorage.go:529  [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot 4bad3f3d at index 18
I180817 22:55:07.437475 43875 rpc/nodedialer/nodedialer.go:92  [replicate,n1,s1,r12/1:/Table/1{5-6}] connection to n3 established
I180817 22:55:07.438433 43776 rpc/nodedialer/nodedialer.go:92  [ct-client] connection to n3 established
I180817 22:55:07.451839 43875 storage/store_snapshot.go:655  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 13ms
I180817 22:55:07.455108 43924 storage/replica_raftstorage.go:773  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 18 (id=4bad3f3d, encoded size=2248, 1 rocksdb batches, 8 log entries)
I180817 22:55:07.464622 43924 storage/replica_raftstorage.go:779  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I180817 22:55:07.481848 43875 storage/replica_command.go:792  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2, gen=842372720736]
I180817 22:55:07.520285 43210 storage/replica_proposal.go:214  [n2,s2,r5/2:/System/ts{d-e}] new range lease repl=(n2,s2):2 seq=3 start=1534546507.321304107,0 epo=1 pro=1534546507.321320048,0 following repl=(n1,s1):1 seq=2 start=1534546500.545227719,0 exp=1534546509.570492849,0 pro=1534546500.570597902,0
I180817 22:55:07.534818 43877 storage/replica_command.go:282  [split,n1,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
I180817 22:55:07.564853 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180817 22:55:07.634653 43875 storage/replica.go:3752  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180817 22:55:07.712940 43911 storage/replica_command.go:792  [replicate,n2,s2,r5/2:/System/ts{d-e}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, (n2,s2):2, next=3, gen=842389542560]
I180817 22:55:07.736906 43900 storage/replica_range_lease.go:554  [replicate,n1,s1,r12/1:/Table/1{5-6}] transferring lease to s3
I180817 22:55:07.748722 43900 storage/replica_range_lease.go:617  [replicate,n1,s1,r12/1:/Table/1{5-6}] done transferring lease to s3: <nil>
I180817 22:55:07.751851 43467 storage/replica_proposal.go:214  [n3,s3,r12/2:/Table/1{5-6}] new range lease repl=(n3,s3):2 seq=3 start=1534546507.738273785,0 epo=1 pro=1534546507.738292822,0 following repl=(n1,s1):1 seq=2 start=1534546500.545227719,0 exp=1534546509.570492849,0 pro=1534546500.570597902,0
I180817 22:55:07.758426 43902 storage/replica_raftstorage.go:529  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot c953487c at index 26
I180817 22:55:07.766331 43902 storage/store_snapshot.go:655  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 12, log entries: 16, rate-limit: 2.0 MiB/sec, 4ms
I180817 22:55:07.799545 43920 storage/replica_raftstorage.go:773  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 26 (id=c953487c, encoded size=5443, 1 rocksdb batches, 16 log entries)
I180817 22:55:07.815824 43920 storage/replica_raftstorage.go:779  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=9ms commit=6ms]
I180817 22:55:07.879832 43902 storage/replica_command.go:792  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2, gen=842353447960]
I180817 22:55:07.910175 43825 storage/replica_command.go:792  [replicate,n3,s3,r12/2:/Table/1{5-6}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, (n3,s3):2, next=3, gen=842421267424]
I180817 22:55:07.924925 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:0s User:root}
I180817 22:55:07.954878 43911 storage/replica.go:3752  [n2,s2,r5/2:/System/ts{d-e}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I180817 22:55:08.021744 43353 storage/store.go:3663  [n1,s1,r5/1:/System/ts{d-e}] added to replica GC queue (peer suggestion)
I180817 22:55:08.040376 43902 storage/replica.go:3752  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I180817 22:55:08.067251 44009 storage/store.go:2563  [replicaGC,n1,s1,r5/1:/System/ts{d-e}] removing replica
I180817 22:55:08.071625 44009 storage/replica.go:883  [replicaGC,n1,s1,r5/1:/System/ts{d-e}] removed 469 (461+8) keys in 1ms [clear=1ms commit=0ms]
I180817 22:55:08.181769 44039 storage/replica_range_lease.go:554  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] transferring lease to s3
I180817 22:55:08.185661 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:0s User:root}
I180817 22:55:08.207351 44039 storage/replica_range_lease.go:617  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] done transferring lease to s3: <nil>
I180817 22:55:08.242420 43825 storage/replica.go:3752  [n3,s3,r12/2:/Table/1{5-6}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):2] next=3
I180817 22:55:08.254916 43887 storage/store.go:3663  [n1,s1,r12/1:/Table/1{5-6}] added to replica GC queue (peer suggestion)
--- FAIL: testrace/TestRepartitioning: TestRepartitioning/single_col_range_partitioning_-_MAXVALUE/single_col_range_partitioning (56.160s)
soon.go:49: condition failed to evaluate within 45s: expected to scan on n3: SELECT count(*) FROM "single col range partitioning" WHERE a >= 3 AND a < 4
    [n2,s2,r7/2:/Table/{SystemCon…-11}] read completed
    full trace:
       === SPAN START: session recording ===
      [n1,client=127.0.0.1:57826,user=root] [NoTxn pos:3611] executing ExecStmt: SELECT count(*) FROM "single col range partitioning" WHERE (a >= 3) AND (a < 4)
      [n1,client=127.0.0.1:57826,user=root] executing: SELECT count(*) FROM "single col range partitioning" WHERE (a >= 3) AND (a < 4) in state: NoTxn
       === SPAN START: sql txn ===
      [n1,client=127.0.0.1:57826,user=root] [Open pos:3611] executing ExecStmt: SELECT count(*) FROM "single col range partitioning" WHERE (a >= 3) AND (a < 4)
      [n1,client=127.0.0.1:57826,user=root] executing: SELECT count(*) FROM "single col range partitioning" WHERE (a >= 3) AND (a < 4) in state: Open
      [n1,client=127.0.0.1:57826,user=root] planning starts: SELECT
      [n1,client=127.0.0.1:57826,user=root] generating optimizer plan
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [client=127.0.0.1:57826,user=root,txn=0b2e07e6,n1] querying next range at /Table/2/1/77/"single col range partitioning"/3/1
      [client=127.0.0.1:57826,user=root,txn=0b2e07e6,n1] r7: sending batch 1 Get to (n2,s2):2
      [client=127.0.0.1:57826,user=root,txn=0b2e07e6,n1] sending request to 127.0.0.1:41623
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n2] 1 Get
      [n2,s2] executing 1 requests
      [n2,s2,r7/2:/Table/{SystemCon…-11}] read-only path
      [n2,s2,r7/2:/Table/{SystemCon…-11}] command queue
      [n2,s2,r7/2:/Table/{SystemCon…-11}] waiting for read lock
      [n2,s2,r7/2:/Table/{SystemCon…-11}] read completed
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:57826,user=root] client.Txn did AutoCommit. err: <nil>

      [n1,client=127.0.0.1:57826,user=root] added table 'data.public."single col range partitioning"' to table collection
      [n1,client=127.0.0.1:57826,user=root] optimizer plan succeeded
      [n1,client=127.0.0.1:57826,user=root] planning ends
      [n1,client=127.0.0.1:57826,user=root] checking distributability
      [n1,client=127.0.0.1:57826,user=root] distributable plan: true
      [n1,client=127.0.0.1:57826,user=root] execution starts: distributed
       === SPAN START: consuming rows ===
      [n1,client=127.0.0.1:57826,user=root] creating DistSQL plan with distributedMode=true
      [n1,client=127.0.0.1:57826,user=root] querying next range at /Table/78/1/3
      [n1,client=127.0.0.1:57826,user=root] running DistSQL plan
       === SPAN START: /cockroach.sql.distsqlrun.DistSQL/SetupFlow ===
       === SPAN START: outbox ===
    cockroach.stat.outbox.bytes_sent: ߂
    cockroach.streamid: 1
       === SPAN START: [async] drain ===
       === SPAN START: table reader ===
    cockroach.processorid: 0
    cockroach.stat.tablereader.input.rows: 0
    cockroach.stat.tablereader.stalltime: 2µs
      [n3] Scan /Table/78/1/3{-/#}
       === SPAN START: txn coordinator send ===
       === SPAN START: dist sender send ===
      [txn=0b5e7a55,n3] querying next range at /Table/78/1/3
      [txn=0b5e7a55,n3] r38: sending batch 1 Scan to (n3,s3):2
      [txn=0b5e7a55,n3] sending request to local server
       === SPAN START: /cockroach.roachpb.Internal/Batch ===
      [n3] 1 Scan
      [n3,s3] executing 1 requests
      [n3,s3,r38/2:/Table/78/1/{3-4}] read-only path
      [n3,s3,r38/2:/Table/78/1/{3-4}] command queue
      [n3,s3,r38/2:/Table/78/1/{3-4}] waiting for read lock
      [n3,s3,r38/2:/Table/78/1/{3-4}] read completed
       === SPAN START: ordered aggregator ===
    cockroach.processorid: 1
    cockroach.stat.aggregator.input.rows: 0
    cockroach.stat.aggregator.mem.max: 10 KiB
    cockroach.stat.aggregator.stalltime: 346µs
      [n3] accumulation complete
       === SPAN START: flow ===
      [n1,client=127.0.0.1:57826,user=root] starting (1 processors, 0 startables)
       === SPAN START: noop ===
    cockroach.processorid: 2
      [n3] Consumer sent handshake. Consuming flow scheduled: false
      [n3] Consumer sent handshake. Consuming flow scheduled: true
      [n1,client=127.0.0.1:57826,user=root] execution ends
      [n1,client=127.0.0.1:57826,user=root] rows affected: 1
       === SPAN START: txn coordinator send ===
      [n1,client=127.0.0.1:57826,user=root] AutoCommit. err: <nil>
      [n1,client=127.0.0.1:57826,user=root] releasing 1 tables
      [n1,client=127.0.0.1:57826,user=root] [NoTxn pos:3612] executing ExecStmt: SET TRACING = off
      [n1,client=127.0.0.1:57826,user=root] executing: SET TRACING = off in state: NoTxn
    goroutine 69221 [running]:
    runtime/debug.Stack(0xa7a358200, 0xc4206ceea0, 0x3d9f000)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xb5
    github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3e00420, 0xc421113a40, 0xc4206cee10)
        /go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
    github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestRepartitioning.func1(0xc421113a40)
        /go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1331 +0x1280
    testing.tRunner(0xc421113a40, 0xc42392a750)
        /usr/local/go/src/testing/testing.go:777 +0x16e
    created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:824 +0x565
------- Stdout: -------
I180817 22:57:22.388986 43584 sql/lease.go:345  [n1,client=127.0.0.1:57826,user=root] publish: descID=76 (single col range partitioning - MAXVALUE) version=5 mtime=2018-08-17 22:57:22.269732859 +0000 UTC
I180817 22:57:22.709847 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "drop_database", target: 75, info: {DatabaseName:data Statement:DROP DATABASE IF EXISTS data User:root DroppedSchemaObjects:[data.public."single col range partitioning - MAXVALUE"]}
I180817 22:57:23.135902 43584 sql/lease.go:345  [n1,client=127.0.0.1:57826,user=root,scExec=] publish: descID=76 (single col range partitioning - MAXVALUE) version=6 mtime=2018-08-17 22:57:23.127384724 +0000 UTC
I180817 22:57:23.582499 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "create_database", target: 77, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I180817 22:57:23.638541 69402 storage/replica_command.go:282  [split,n2,s2,r142/1:/{Table/76/2-Max}] initiating a split of this range at key /Table/77 [r146]
I180817 22:57:23.788956 43206 storage/replica_proposal.go:214  [n2,s2,r142/1:/{Table/76/2-Max}] new range lease repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:23.884465 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "create_table", target: 78, info: {TableName:data.public."single col range partitioning - MAXVALUE" Statement:CREATE TABLE "single col range partitioning - MAXVALUE" (a INT PRIMARY KEY) PARTITION BY RANGE (a) (PARTITION p4 VALUES FROM (MINVALUE) TO (4), PARTITION p5 VALUES FROM (4) TO (5), PARTITION px VALUES FROM (5) TO (MAXVALUE)) User:root}
I180817 22:57:23.921914 69343 storage/replica_command.go:282  [split,n2,s2,r146/1:/{Table/77-Max}] initiating a split of this range at key /Table/78 [r147]
I180817 22:57:24.126749 43174 storage/replica_proposal.go:214  [n2,s2,r146/1:/{Table/77-Max}] new range lease repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:24.356992 69525 storage/replica_command.go:282  [split,n2,s2,r147/1:/{Table/78-Max}] initiating a split of this range at key /Table/78/1 [r148]
I180817 22:57:24.357564 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "set_zone_config", target: 78, info: {Target:data."single col range partitioning - MAXVALUE".p4 Config:constraints: [+n1] User:root}
I180817 22:57:24.620183 43157 storage/replica_proposal.go:214  [n2,s2,r147/1:/{Table/78-Max}] new range lease repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:24.628396 69546 storage/replica_command.go:282  [split,n2,s2,r148/1:/{Table/78/1-Max}] initiating a split of this range at key /Table/78/1/4 [r149]
I180817 22:57:24.788346 43176 storage/replica_proposal.go:214  [n2,s2,r148/1:/{Table/78/1-Max}] new range lease repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:24.800709 69532 storage/replica_raftstorage.go:529  [replicate,n2,s2,r148/1:/Table/78/1{-/4}] generated preemptive snapshot d69c970a at index 15
I180817 22:57:24.814364 69532 storage/store_snapshot.go:655  [replicate,n2,s2,r148/1:/Table/78/1{-/4}] streamed snapshot to (n1,s1):?: kv pairs: 8, log entries: 5, rate-limit: 2.0 MiB/sec, 11ms
I180817 22:57:24.829085 69535 storage/replica_raftstorage.go:773  [n1,s1,r148/?:{-}] applying preemptive snapshot at index 15 (id=d69c970a, encoded size=2251, 1 rocksdb batches, 5 log entries)
I180817 22:57:24.847579 69535 storage/replica_raftstorage.go:779  [n1,s1,r148/?:/Table/78/1{-/4}] applied preemptive snapshot in 18ms [clear=0ms batch=0ms entries=16ms commit=1ms]
I180817 22:57:24.853371 69532 storage/replica_command.go:792  [replicate,n2,s2,r148/1:/Table/78/1{-/4}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r148:/Table/78/1{-/4} [(n2,s2):1, next=2, gen=842415409872]
I180817 22:57:24.977121 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "set_zone_config", target: 78, info: {Target:data."single col range partitioning - MAXVALUE".p5 Config:constraints: [+n2] User:root}
I180817 22:57:24.994582 69532 storage/replica.go:3752  [n2,s2,r148/1:/Table/78/1{-/4}] proposing ADD_REPLICA((n1,s1):2): updated=[(n2,s2):1 (n1,s1):2] next=3
I180817 22:57:25.021240 69667 storage/replica_command.go:282  [split,n2,s2,r149/1:/{Table/78/1/4-Max}] initiating a split of this range at key /Table/78/1/5 [r150]
I180817 22:57:25.123822 69627 storage/replica_range_lease.go:554  [replicate,n2,s2,r148/1:/Table/78/1{-/4}] transferring lease to s1
I180817 22:57:25.141778 42546 storage/replica_proposal.go:214  [n1,s1,r148/2:/Table/78/1{-/4}] new range lease repl=(n1,s1):2 seq=7 start=1534546645.125170967,0 epo=1 pro=1534546645.125206619,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:25.167671 69627 storage/replica_range_lease.go:617  [replicate,n2,s2,r148/1:/Table/78/1{-/4}] done transferring lease to s1: <nil>
I180817 22:57:25.179402 69630 storage/replica_raftstorage.go:529  [replicate,n2,s2,r147/1:/Table/78{-/1}] generated preemptive snapshot 5b624b1c at index 18
I180817 22:57:25.218335 43160 storage/replica_proposal.go:214  [n2,s2,r149/1:/{Table/78/1/4-Max}] new range lease repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:25.258260 69630 storage/store_snapshot.go:655  [replicate,n2,s2,r147/1:/Table/78{-/1}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 72ms
I180817 22:57:25.270744 69716 storage/replica_raftstorage.go:773  [n3,s3,r147/?:{-}] applying preemptive snapshot at index 18 (id=5b624b1c, encoded size=2229, 1 rocksdb batches, 8 log entries)
I180817 22:57:25.274488 69716 storage/replica_raftstorage.go:779  [n3,s3,r147/?:/Table/78{-/1}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I180817 22:57:25.288405 69630 storage/replica_command.go:792  [replicate,n2,s2,r147/1:/Table/78{-/1}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r147:/Table/78{-/1} [(n2,s2):1, next=2, gen=842354814184]
I180817 22:57:25.423378 69630 storage/replica.go:3752  [n2,s2,r147/1:/Table/78{-/1}] proposing ADD_REPLICA((n3,s3):2): updated=[(n2,s2):1 (n3,s3):2] next=3
I180817 22:57:25.502425 69706 storage/replica_command.go:792  [replicate,n1,s1,r148/2:/Table/78/1{-/4}] change replicas (REMOVE_REPLICA (n2,s2):1): read existing descriptor r148:/Table/78/1{-/4} [(n2,s2):1, (n1,s1):2, next=3, gen=842429948152]
I180817 22:57:25.525690 69751 storage/replica_range_lease.go:554  [replicate,n2,s2,r147/1:/Table/78{-/1}] transferring lease to s3
I180817 22:57:25.546690 43232 server/status/runtime.go:433  [n2] runtime stats: 1.1 GiB RSS, 645 goroutines, 39 MiB/35 MiB/100 MiB GO alloc/idle/total, 95 MiB/137 MiB CGO alloc/total, 2779.02cgo/sec, 1.79/0.08 %(u/s)time, 0.02 %gc (14x)
I180817 22:57:25.565959 43555 storage/replica_proposal.go:214  [n3,s3,r147/2:/Table/78{-/1}] new range lease repl=(n3,s3):2 seq=7 start=1534546645.525885073,0 epo=1 pro=1534546645.525921302,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:25.586324 69751 storage/replica_range_lease.go:617  [replicate,n2,s2,r147/1:/Table/78{-/1}] done transferring lease to s3: <nil>
I180817 22:57:25.590846 69721 storage/replica_raftstorage.go:529  [replicate,n2,s2,r5/2:/System/ts{d-e}] generated preemptive snapshot 4f976c51 at index 115
I180817 22:57:25.729717 69706 storage/replica.go:3752  [n1,s1,r148/2:/Table/78/1{-/4}] proposing REMOVE_REPLICA((n2,s2):1): updated=[(n1,s1):2] next=3
I180817 22:57:25.757477 43386 storage/store.go:3663  [n2,s2,r148/1:/Table/78/1{-/4}] added to replica GC queue (peer suggestion)
I180817 22:57:25.772275 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "set_zone_config", target: 78, info: {Target:data."single col range partitioning - MAXVALUE".px Config:constraints: [+n3] User:root}
I180817 22:57:25.773388 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a < 4
I180817 22:57:25.780460 69803 storage/replica_command.go:282  [split,n2,s2,r150/1:/{Table/78/1/5-Max}] initiating a split of this range at key /Table/78/2 [r151]
I180817 22:57:25.797396 69755 storage/store.go:2563  [replicaGC,n2,s2,r148/1:/Table/78/1{-/4}] removing replica
I180817 22:57:25.816785 69755 storage/replica.go:883  [replicaGC,n2,s2,r148/1:/Table/78/1{-/4}] removed 7 (0+7) keys in 17ms [clear=15ms commit=2ms]
I180817 22:57:25.839959 69721 storage/store_snapshot.go:655  [replicate,n2,s2,r5/2:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1395, log entries: 2, rate-limit: 2.0 MiB/sec, 238ms
I180817 22:57:25.874268 69766 storage/replica_raftstorage.go:773  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 115 (id=4f976c51, encoded size=382133, 2 rocksdb batches, 2 log entries)
I180817 22:57:25.877741 43234 server/status/recorder.go:652  [n2,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:57:25.941219 69766 storage/replica_raftstorage.go:779  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 67ms [clear=0ms batch=2ms entries=43ms commit=21ms]
I180817 22:57:25.948870 69721 storage/replica_command.go:792  [replicate,n2,s2,r5/2:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r5:/System/ts{d-e} [(n2,s2):2, next=3, gen=842415411160]
I180817 22:57:26.091896 69721 storage/replica.go:3752  [n2,s2,r5/2:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):3): updated=[(n2,s2):2 (n3,s3):3] next=4
I180817 22:57:26.100072 43194 storage/replica_proposal.go:214  [n2,s2,r150/1:/{Table/78/1/5-Max}] new range lease repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:26.186146 69834 storage/replica_range_lease.go:554  [replicate,n2,s2,r5/2:/System/ts{d-e}] transferring lease to s3
I180817 22:57:26.197409 43534 storage/replica_proposal.go:214  [n3,s3,r5/3:/System/ts{d-e}] new range lease repl=(n3,s3):3 seq=4 start=1534546646.187484419,0 epo=1 pro=1534546646.187517546,0 following repl=(n2,s2):2 seq=3 start=1534546507.321304107,0 epo=1 pro=1534546507.321320048,0
I180817 22:57:26.201371 69834 storage/replica_range_lease.go:617  [replicate,n2,s2,r5/2:/System/ts{d-e}] done transferring lease to s3: <nil>
I180817 22:57:26.205413 69853 storage/replica_raftstorage.go:529  [replicate,n2,s2,r107/1:/Table/7{4/2-5}] generated preemptive snapshot fa832108 at index 18
I180817 22:57:26.211963 69853 storage/store_snapshot.go:655  [replicate,n2,s2,r107/1:/Table/7{4/2-5}] streamed snapshot to (n1,s1):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
I180817 22:57:26.215359 69856 storage/replica_raftstorage.go:773  [n1,s1,r107/?:{-}] applying preemptive snapshot at index 18 (id=fa832108, encoded size=2222, 1 rocksdb batches, 8 log entries)
I180817 22:57:26.254086 69856 storage/replica_raftstorage.go:779  [n1,s1,r107/?:/Table/7{4/2-5}] applied preemptive snapshot in 38ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180817 22:57:26.265182 69853 storage/replica_command.go:792  [replicate,n2,s2,r107/1:/Table/7{4/2-5}] change replicas (ADD_REPLICA (n1,s1):2): read existing descriptor r107:/Table/7{4/2-5} [(n2,s2):1, next=2, gen=842353549032]
I180817 22:57:26.337643 69853 storage/replica.go:3752  [n2,s2,r107/1:/Table/7{4/2-5}] proposing ADD_REPLICA((n1,s1):2): updated=[(n2,s2):1 (n1,s1):2] next=3
I180817 22:57:26.357025 69909 storage/replica_command.go:792  [replicate,n3,s3,r147/2:/Table/78{-/1}] change replicas (REMOVE_REPLICA (n2,s2):1): read existing descriptor r147:/Table/78{-/1} [(n2,s2):1, (n3,s3):2, next=3, gen=842383429160]
I180817 22:57:26.422603 69842 storage/replica_range_lease.go:554  [replicate,n2,s2,r107/1:/Table/7{4/2-5}] transferring lease to s1
I180817 22:57:26.471459 69842 storage/replica_range_lease.go:617  [replicate,n2,s2,r107/1:/Table/7{4/2-5}] done transferring lease to s1: <nil>
I180817 22:57:26.498968 42529 storage/replica_proposal.go:214  [n1,s1,r107/2:/Table/7{4/2-5}] new range lease repl=(n1,s1):2 seq=7 start=1534546646.422776454,0 epo=1 pro=1534546646.422806308,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:26.509430 69988 storage/replica_raftstorage.go:529  [replicate,n2,s2,r150/1:/Table/78/{1/5-2}] generated preemptive snapshot 78586270 at index 18
I180817 22:57:26.514855 69988 storage/store_snapshot.go:655  [replicate,n2,s2,r150/1:/Table/78/{1/5-2}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
I180817 22:57:26.520304 69990 storage/replica_raftstorage.go:773  [n3,s3,r150/?:{-}] applying preemptive snapshot at index 18 (id=78586270, encoded size=2267, 1 rocksdb batches, 8 log entries)
I180817 22:57:26.522891 69990 storage/replica_raftstorage.go:779  [n3,s3,r150/?:/Table/78/{1/5-2}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I180817 22:57:26.527647 69988 storage/replica_command.go:792  [replicate,n2,s2,r150/1:/Table/78/{1/5-2}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r150:/Table/78/{1/5-2} [(n2,s2):1, next=2, gen=842397805472]
I180817 22:57:26.663861 69909 storage/replica.go:3752  [n3,s3,r147/2:/Table/78{-/1}] proposing REMOVE_REPLICA((n2,s2):1): updated=[(n3,s3):2] next=3
I180817 22:57:26.677567 69988 storage/replica.go:3752  [n2,s2,r150/1:/Table/78/{1/5-2}] proposing ADD_REPLICA((n3,s3):2): updated=[(n2,s2):1 (n3,s3):2] next=3
I180817 22:57:26.682648 45785 storage/store.go:3663  [n2,s2,r147/1:/Table/78{-/1}] added to replica GC queue (peer suggestion)
I180817 22:57:26.697747 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a < 4
I180817 22:57:26.751381 43640 server/status/runtime.go:433  [n3] runtime stats: 1.1 GiB RSS, 641 goroutines, 40 MiB/37 MiB/100 MiB GO alloc/idle/total, 98 MiB/141 MiB CGO alloc/total, 3027.06cgo/sec, 1.81/0.08 %(u/s)time, 0.02 %gc (14x)
I180817 22:57:26.775660 69962 storage/store.go:2563  [replicaGC,n2,s2,r147/1:/Table/78{-/1}] removing replica
I180817 22:57:26.778196 69962 storage/replica.go:883  [replicaGC,n2,s2,r147/1:/Table/78{-/1}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180817 22:57:26.832292 70041 storage/replica_range_lease.go:554  [replicate,n2,s2,r150/1:/Table/78/{1/5-2}] transferring lease to s3
I180817 22:57:26.842666 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a >= 4 AND a < 5
I180817 22:57:26.848200 70041 storage/replica_range_lease.go:617  [replicate,n2,s2,r150/1:/Table/78/{1/5-2}] done transferring lease to s3: <nil>
I180817 22:57:26.855437 43540 storage/replica_proposal.go:214  [n3,s3,r150/2:/Table/78/{1/5-2}] new range lease repl=(n3,s3):2 seq=7 start=1534546646.832466274,0 epo=1 pro=1534546646.832538297,0 following repl=(n2,s2):1 seq=6 start=1534546599.976797337,0 epo=1 pro=1534546599.976827457,0
I180817 22:57:26.944926 69887 storage/replica_command.go:792  [replicate,n3,s3,r5/3:/System/ts{d-e}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r5:/System/ts{d-e} [(n2,s2):2, (n3,s3):3, next=4, gen=842422941872]
I180817 22:57:26.960136 43642 server/status/recorder.go:652  [n3,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:57:26.988403 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a < 4
I180817 22:57:27.006253 70084 storage/replica_command.go:792  [replicate,n1,s1,r107/2:/Table/7{4/2-5}] change replicas (REMOVE_REPLICA (n2,s2):1): read existing descriptor r107:/Table/7{4/2-5} [(n2,s2):1, (n1,s1):2, next=3, gen=842429954232]
I180817 22:57:27.074952 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a >= 4 AND a < 5
I180817 22:57:27.109424 69887 storage/replica.go:3752  [n3,s3,r5/3:/System/ts{d-e}] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n3,s3):3] next=4
I180817 22:57:27.131876 45785 storage/store.go:3663  [n2,s2,r5/2:/System/ts{d-e}] added to replica GC queue (peer suggestion)
I180817 22:57:27.178442 70030 storage/replica_command.go:792  [replicate,n3,s3,r150/2:/Table/78/{1/5-2}] change replicas (REMOVE_REPLICA (n2,s2):1): read existing descriptor r150:/Table/78/{1/5-2} [(n2,s2):1, (n3,s3):2, next=3, gen=842395429968]
I180817 22:57:27.225605 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a > 5
I180817 22:57:27.286005 70077 storage/store.go:2563  [replicaGC,n2,s2,r5/2:/System/ts{d-e}] removing replica
I180817 22:57:27.288325 70077 storage/replica.go:883  [replicaGC,n2,s2,r5/2:/System/ts{d-e}] removed 470 (461+9) keys in 1ms [clear=1ms commit=0ms]
I180817 22:57:27.326263 70084 storage/replica.go:3752  [n1,s1,r107/2:/Table/7{4/2-5}] proposing REMOVE_REPLICA((n2,s2):1): updated=[(n1,s1):2] next=3
I180817 22:57:27.390625 43386 storage/store.go:3663  [n2,s2,r107/1:/Table/7{4/2-5}] added to replica GC queue (peer suggestion)
I180817 22:57:27.446045 70015 storage/store.go:2563  [replicaGC,n2,s2,r107/1:/Table/7{4/2-5}] removing replica
I180817 22:57:27.448359 70015 storage/replica.go:883  [replicaGC,n2,s2,r107/1:/Table/7{4/2-5}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180817 22:57:27.507879 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a < 4
I180817 22:57:27.516178 70030 storage/replica.go:3752  [n3,s3,r150/2:/Table/78/{1/5-2}] proposing REMOVE_REPLICA((n2,s2):1): updated=[(n3,s3):2] next=3
I180817 22:57:27.530572 45785 storage/store.go:3663  [n2,s2,r150/1:/Table/78/{1/5-2}] added to replica GC queue (peer suggestion)
I180817 22:57:27.551953 70173 storage/store.go:2563  [replicaGC,n2,s2,r150/1:/Table/78/{1/5-2}] removing replica
I180817 22:57:27.554379 70173 storage/replica.go:883  [replicaGC,n2,s2,r150/1:/Table/78/{1/5-2}] removed 7 (0+7) keys in 1ms [clear=0ms commit=0ms]
I180817 22:57:27.571765 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a >= 4 AND a < 5
I180817 22:57:27.628909 69221 ccl/partitionccl/partition_test.go:200  query: SELECT count(*) FROM "single col range partitioning - MAXVALUE" WHERE a > 5
I180817 22:57:27.814275 43584 sql/lease.go:345  [n1,client=127.0.0.1:57826,user=root] publish: descID=78 (single col range partitioning) version=2 mtime=2018-08-17 22:57:27.730668583 +0000 UTC
I180817 22:57:28.131337 70218 sql/lease.go:877  new lease: 78("single col range partitioning") ver=2:1534547002.435536807,0, refcount=0
I180817 22:57:28.237130 43584 sql/lease.go:315  publish (1 leases): desc=[{single col range partitioning 78 1}]
I180817 22:57:28.514902 43584 sql/lease.go:345  [n1,client=127.0.0.1:57826,user=root,scExec=] publish: descID=78 (single col range partitioning) version=3 mtime=2018-08-17 22:57:28.506088922 +0000 UTC
I180817 22:57:28.695638 70363 sql/lease.go:877  new lease: 78("single col range partitioning") ver=3:1534546972.192442527,0, refcount=0
I180817 22:57:28.715544 43584 sql/lease.go:315  publish (1 leases): desc=[{single col range partitioning 78 2}]
I180817 22:57:28.922275 43584 sql/lease.go:315  publish (1 leases): desc=[{single col range partitioning 78 2}]
I180817 22:57:29.333797 43584 sql/lease.go:345  [n1,client=127.0.0.1:57826,user=root] publish: descID=78 (single col range partitioning) version=4 mtime=2018-08-17 22:57:29.118876332 +0000 UTC
I180817 22:57:29.611146 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "alter_table", target: 78, info: {TableName:data.public."single col range partitioning" Statement:ALTER TABLE "single col range partitioning" PARTITION BY RANGE (a) (PARTITION p3 VALUES FROM (MINVALUE) TO (3), PARTITION p4 VALUES FROM (3) TO (4)) User:root MutationID:0 CascadeDroppedViews:[]}
I180817 22:57:29.820743 70382 sql/lease.go:877  new lease: 78("single col range partitioning") ver=4:1534546968.328141079,0, refcount=0
I180817 22:57:30.066658 43584 sql/lease.go:315  publish (1 leases): desc=[{single col range partitioning 78 3}]
I180817 22:57:30.633121 42336 server/status/runtime.go:433  [n1] runtime stats: 1.1 GiB RSS, 639 goroutines, 38 MiB/37 MiB/100 MiB GO alloc/idle/total, 97 MiB/141 MiB CGO alloc/total, 2937.68cgo/sec, 1.77/0.07 %(u/s)time, 0.02 %gc (14x)
I180817 22:57:31.038887 42338 server/status/recorder.go:652  [n1,summaries] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
I180817 22:57:31.457820 43584 sql/event_log.go:126  [n1,client=127.0.0.1:57826,user=root] Event: "set_zone_config", target: 78, info: {Target:data."single col range partitioning"@primary Config:constraints: [+n1] User:root}
I180817 22:57:31.470519 70867 storage/replica_command.go:282  [split,n1,s1,r148/2:/Table/78/1{-/4}] initiating a split of this range at key /Table/78/1/3 [r38]
I180817 22:57:31.485739 70852 storage/replica_raftstorage.go:529  [replicate,n2,s2,r149/1:/Table/78/1/{4-5}] generated preemptive snapshot be9a9bf1 at index 18
I180817 22:57:31.501134

Please assign, take a look, and update the issue accordingly.

a-robinson commented 6 years ago

Dupe of https://github.com/cockroachdb/cockroach/issues/28786