mimblewimble / grin

Minimal implementation of the Mimblewimble protocol.
https://grin.mw/
Apache License 2.0

HeaderSync Speedup #1372

Closed garyyu closed 6 years ago

garyyu commented 6 years ago

Currently I need 11 minutes to complete a full fast-sync on my Mac Air (Early 2015).

And reported by @antiochp :

~7mins for full fast sync

Reported by @hashmap :

10 mins for fast sync from scratch for me

I hope a full fast sync at the current ~50k height can take less than 5 minutes on a slow machine like mine, and less than 3 minutes on a fast new machine.

Here is a profile of my 11 minutes:

| No. | Time | Percentage | Task |
| --- | --- | --- | --- |
| 1 | 6 m | 54.5% | HeaderSync |
| 2 | 12 s | 1.8% | txhashset download (size example: 46 MB) |
| 3 | 1 m 12 s | 10.9% | copy and save file: txhashset.zip |
| 4 | 5 s | 0.7% | unzip and check txhashset.zip |
| 5 | 50 s | 7.6% | validate the output, rproof and kernel MMRs |
| 6 | 14 s | 2.1% | validate_roots + verify_kernel_sums + verify_kernel_signatures |
| 7 | 38 s | 5.8% | range proof validation (batch) |
| 8 | 6 s | 0.9% | txhashset::extending() |
| 9 | 1 m 30 s | 13.6% | BodySync |
| 10 | 14 s | 2.1% | others |

The time spent in No. 3, "copy and save file: txhashset.zip", looks unreasonable; I will check it later.

And let's come to the biggest part: header sync!

The current header sync implementation uses a sequential design: each time, the node requests only one package (512 headers) from one peer:

sync: request_headers: asking x.x.x.x:13414 for headers

Each request normally takes an average of 3 seconds on my machine before the requested header package is received:

Aug 17 23:46:03.749 DEBG sync: request_headers: asking 51.x:13414 for headers
Aug 17 23:46:07.557 INFO Received block headers from 51.x:13414
Aug 17 23:46:08.813 DEBG sync: request_headers: asking 195.x:13414 for headers
Aug 17 23:46:11.804 INFO Received block headers from 195.x:13414

So I propose to change it from sequential to parallel (requesting from at least 3 peers at the same time); this should reduce the 6-minute HeaderSync to around 2 minutes.

Before trying this, do you think it's a good direction? Or is there some constraint that prevents parallel requests?

ignopeverell commented 6 years ago

There are constraints: headers are unknown beforehand. It's not like block downloads, where we know which hashes we want and can ask for them in parallel. Header download is inherently a lot more sequential, as we have to discover within a given request what the next one will be.

There are still options. First, it'd be interesting to see how much of the header sync time is spent 1) processing, 2) downloading, 3) waiting for a response. There may be several possible optimizations there.

Another simple option would be to increase the batch size. For comparison, bitcoin downloads 2000 block headers at a time (500 for us). However, bitcoin block headers are 80 bytes, while ours are closer to 300 bytes. That being said, each header batch is still only ~150 KB, which isn't that much on the modern internet. We could likely push to 2000 headers. But that wouldn't help much if this isn't the bottleneck, so we should likely start with the first option before getting there.
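The batch-size arithmetic above can be checked with a quick sketch (the per-header byte counts are the approximate figures from this comment, not exact wire sizes):

```rust
// Rough payload size of one header batch, using the approximate
// per-header sizes mentioned above (assumptions, not exact wire sizes).
fn batch_bytes(headers_per_batch: usize, header_bytes: usize) -> usize {
    headers_per_batch * header_bytes
}

fn main() {
    // grin today: 500 headers at ~300 bytes each -> ~150 KB per batch
    assert_eq!(batch_bytes(500, 300), 150_000);
    // bitcoin: 2000 headers at 80 bytes each -> ~160 KB per batch
    assert_eq!(batch_bytes(2000, 80), 160_000);
    // pushing grin to 2000 headers would be ~600 KB per batch
    assert_eq!(batch_bytes(2000, 300), 600_000);
}
```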

ignopeverell commented 6 years ago

Just pushed a quick fix in 630d0a0, which I think should solve the long txhashset.zip save time. It's rather silly, but the file write wasn't buffered.
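The pattern behind the fix is wrapping the file handle in a `BufWriter` so many small writes are coalesced into few syscalls; a minimal sketch (function name and chunking are illustrative, not grin's actual code):

```rust
use std::fs::File;
use std::io::{BufWriter, Read, Write};
use std::path::Path;

// Buffered save: without BufWriter, every write_all() hits the OS
// directly, which is the kind of overhead that made the original
// txhashset.zip save slow.
fn save_buffered(path: &Path, chunks: &[&[u8]]) -> std::io::Result<()> {
    let file = File::create(path)?;
    let mut writer = BufWriter::new(file);
    for chunk in chunks {
        writer.write_all(chunk)?; // buffered in memory, not a syscall each time
    }
    writer.flush() // push any remaining buffered bytes to disk
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("txhashset_buffered_demo.zip");
    let chunks: [&[u8]; 3] = [b"abc", b"def", b"ghi"];
    save_buffered(&path, &chunks)?;

    let mut contents = String::new();
    File::open(&path)?.read_to_string(&mut contents)?;
    assert_eq!(contents, "abcdefghi");
    std::fs::remove_file(&path)?;
    Ok(())
}
```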

garyyu commented 6 years ago

With that quick fix in https://github.com/mimblewimble/grin/commit/630d0a043864e687df1ced3ed69f60ba6053c79b, step No. 3 now takes 33 seconds (it was 1 m 12 s before). Nice improvement!

But even 33 seconds is still unreasonable for a task that just copies and saves a file (txhashset.zip); I will check it later.

Aug 19 21:23:24.866 DEBG sync_state: sync_status: HeaderSync  -> TxHashsetDownload
Aug 19 21:23:35.872 DEBG handle_payload: txhashset archive for 00447045 at 55917. size=68465379
Aug 19 21:24:09.256 DEBG sync_state: sync_status: TxHashsetDownload -> TxHashsetSetup

Will look into HeaderSync tomorrow.

garyyu commented 6 years ago

A rough profile of the 6 minutes of HeaderSync:

| No. | Percentage | Task |
| --- | --- | --- |
| 1.1 | 44% | headers download |
| 1.2 | 33% | headers processing: headers_received() |
| 1.3 | 23% | get_locator |

For No. 1.1 and No. 1.2, I used a streaming, mixed-processing (parallel) solution, which eliminated the No. 1.2 time.
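The streaming idea can be sketched with a channel: one thread keeps downloading header batches while another processes them, so the two costs overlap instead of adding up (batch contents here are dummy numbers, not real headers):

```rust
use std::sync::mpsc;
use std::thread;

// Pipelined header sync sketch: the downloader thread fetches the next
// batch without waiting for the previous one to be processed, while the
// caller processes batches as they arrive, in order.
fn pipelined_sync(num_batches: u64, batch_size: u64) -> usize {
    let (tx, rx) = mpsc::channel::<Vec<u64>>();

    // "Downloader": stand-in for the network requests.
    let downloader = thread::spawn(move || {
        for b in 0..num_batches {
            let headers: Vec<u64> = (b * batch_size..(b + 1) * batch_size).collect();
            tx.send(headers).unwrap();
        }
        // dropping tx closes the channel, ending the loop below
    });

    // "Processor": stand-in for headers_received().
    let mut processed = 0usize;
    for headers in rx {
        processed += headers.len();
    }
    downloader.join().unwrap();
    processed
}

fn main() {
    assert_eq!(pipelined_sync(4, 512), 2048);
}
```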

And for No. 1.3, I have a question about the get_locator() function:

@ignopeverell could you help?

Currently we're using chain.get_block_header(&header.previous) to get the block hash.

Why not use chain.get_hash_by_height()? That should be simpler and faster. Is there some constraint at this syncing stage against using such an API?

BTW: this API doesn't exist yet, but I can see the index in the database:

HEADER_HEIGHT_PREFIX      (k,v) pair is exactly (height, block hash)

garyyu commented 6 years ago

Sorry for the stupid question above; switching to chain.get_header_by_height(height) is already good enough.

garyyu commented 6 years ago

Haha! I finally reduced the HeaderSync time from 6 minutes to 3 minutes.

Aug 21 18:14:02.955 DEBG sync_state: sync_status: Initial -> HeaderSync { current_height: 0, highest_height: 61087 }

Aug 21 18:17:00.393 DEBG sync_state: sync_status: HeaderSync { current_height: 60710, highest_height: 61090 } -> TxHashsetDownload

ignopeverell commented 6 years ago

Unfortunately you can't use get_header_by_height, because you can't rely on the height, which is why it's not saved in the header sync pipeline processing. Here is the problematic scenario:

  1. You sync headers, going for a head at block X.
  2. You discover a fork at Y < X with head Z, but you're already past Y (so you need to go back a bit).
  3. You download toward Z, starting from Y.

If for whatever reason you've been lied to about Z, or it's an incorrect fork with a bad header somewhere, you'll have overwritten a whole bunch of heights. So heights must only be written once sync is confident that the chain is fully valid, which is why saving heights is only done when saving the body.

Another optimization that wouldn't change any of those heuristics would be to reuse the locator. It goes exponentially far back, so the older blocks are both the most expensive to get and the least likely to change from one locator request to the next. So you could redo just the last few blocks in the locator, assuming no older block headers have been received.

garyyu commented 6 years ago

Let me take a log as an example for locator heights. I don't understand how to reuse the locator here. Do you mean we can drop the locator heights that were used last time?

For example, for [1021, 1019, 1015, 1007, 991, 959, 895, 767, 511, 0], we can remove [511, 0] because we already used them last time. Is that the idea?

Aug 22 17:30:34.181 DEBG sync: locator heights: [0]
Aug 22 17:30:35.908 DEBG sync: locator heights: [511, 509, 505, 497, 481, 449, 385, 257, 1, 0]
Aug 22 17:30:37.582 DEBG sync: locator heights: [1021, 1019, 1015, 1007, 991, 959, 895, 767, 511, 0]
Aug 22 17:30:39.047 DEBG sync: locator heights: [1529, 1527, 1523, 1515, 1499, 1467, 1403, 1275, 1019, 507, 0]
Aug 22 17:30:40.532 DEBG sync: locator heights: [2038, 2036, 2032, 2024, 2008, 1976, 1912, 1784, 1528, 1016, 0]
Aug 22 17:30:42.008 DEBG sync: locator heights: [2549, 2547, 2543, 2535, 2519, 2487, 2423, 2295, 2039, 1527, 503, 0]
Aug 22 17:30:43.605 DEBG sync: locator heights: [3060, 3058, 3054, 3046, 3030, 2998, 2934, 2806, 2550, 2038, 1014, 0]
Aug 22 17:30:44.966 DEBG sync: locator heights: [3569, 3567, 3563, 3555, 3539, 3507, 3443, 3315, 3059, 2547, 1523, 0]
Aug 22 17:30:46.516 DEBG sync: locator heights: [4078, 4076, 4072, 4064, 4048, 4016, 3952, 3824, 3568, 3056, 2032, 0]
Aug 22 17:30:48.391 DEBG sync: locator heights: [4589, 4587, 4583, 4575, 4559, 4527, 4463, 4335, 4079, 3567, 2543, 495, 0]
Aug 22 17:30:49.892 DEBG sync: locator heights: [5098, 5096, 5092, 5084, 5068, 5036, 4972, 4844, 4588, 4076, 3052, 1004, 0]
Aug 22 17:30:51.512 DEBG sync: locator heights: [5609, 5607, 5603, 5595, 5579, 5547, 5483, 5355, 5099, 4587, 3563, 1515, 0]
Aug 22 17:30:54.509 DEBG sync: locator heights: [6120, 6118, 6114, 6106, 6090, 6058, 5994, 5866, 5610, 5098, 4074, 2026, 0]
Aug 22 17:30:56.087 DEBG sync: locator heights: [6631, 6629, 6625, 6617, 6601, 6569, 6505, 6377, 6121, 5609, 4585, 2537, 0]
Aug 22 17:30:57.816 DEBG sync: locator heights: [7142, 7140, 7136, 7128, 7112, 7080, 7016, 6888, 6632, 6120, 5096, 3048, 0]
Aug 22 17:30:59.392 DEBG sync: locator heights: [7653, 7651, 7647, 7639, 7623, 7591, 7527, 7399, 7143, 6631, 5607, 3559, 0]
Aug 22 17:31:00.804 DEBG sync: locator heights: [8164, 8162, 8158, 8150, 8134, 8102, 8038, 7910, 7654, 7142, 6118, 4070, 0]
Aug 22 17:31:02.372 DEBG sync: locator heights: [8675, 8673, 8669, 8661, 8645, 8613, 8549, 8421, 8165, 7653, 6629, 4581, 485, 0]
Aug 22 17:31:03.965 DEBG sync: locator heights: [9184, 9182, 9178, 9170, 9154, 9122, 9058, 8930, 8674, 8162, 7138, 5090, 994, 0]
Aug 22 17:31:05.673 DEBG sync: locator heights: [9695, 9693, 9689, 9681, 9665, 9633, 9569, 9441, 9185, 8673, 7649, 5601, 1505, 0]
Aug 22 17:31:15.680 DEBG sync: locator heights: [9695, 9693, 9689, 9681, 9665, 9633, 9569, 9441, 9185, 8673, 7649, 5601, 1505, 0]
Aug 22 17:31:19.280 DEBG sync: locator heights: [10206, 10204, 10200, 10192, 10176, 10144, 10080, 9952, 9696, 9184, 8160, 6112, 2016, 0]
Aug 22 17:31:21.839 DEBG sync: locator heights: [10715, 10713, 10709, 10701, 10685, 10653, 10589, 10461, 10205, 9693, 8669, 6621, 2525, 0]
Aug 22 17:31:22.877 DEBG sync: locator heights: [11226, 11224, 11220, 11212, 11196, 11164, 11100, 10972, 10716, 10204, 9180, 7132, 3036, 0]
Aug 22 17:31:24.505 DEBG sync: locator heights: [11737, 11735, 11731, 11723, 11707, 11675, 11611, 11483, 11227, 10715, 9691, 7643, 3547, 0]
Aug 22 17:31:26.261 DEBG sync: locator heights: [12248, 12246, 12242, 12234, 12218, 12186, 12122, 11994, 11738, 11226, 10202, 8154, 4058, 0]
Aug 22 17:31:27.987 DEBG sync: locator heights: [12758, 12756, 12752, 12744, 12728, 12696, 12632, 12504, 12248, 11736, 10712, 8664, 4568, 0]
Aug 22 17:31:29.673 DEBG sync: locator heights: [13268, 13266, 13262, 13254, 13238, 13206, 13142, 13014, 12758, 12246, 11222, 9174, 5078, 0]
Aug 22 17:31:31.316 DEBG sync: locator heights: [13779, 13777, 13773, 13765, 13749, 13717, 13653, 13525, 13269, 12757, 11733, 9685, 5589, 0]
Aug 22 17:31:32.721 DEBG sync: locator heights: [14288, 14286, 14282, 14274, 14258, 14226, 14162, 14034, 13778, 13266, 12242, 10194, 6098, 0]
Aug 22 17:31:33.745 DEBG sync: locator heights: [14798, 14796, 14792, 14784, 14768, 14736, 14672, 14544, 14288, 13776, 12752, 10704, 6608, 0]
Aug 22 17:31:35.472 DEBG sync: locator heights: [15309, 15307, 15303, 15295, 15279, 15247, 15183, 15055, 14799, 14287, 13263, 11215, 7119, 0]
Aug 22 17:31:37.141 DEBG sync: locator heights: [15817, 15815, 15811, 15803, 15787, 15755, 15691, 15563, 15307, 14795, 13771, 11723, 7627, 0]
Aug 22 17:31:40.990 DEBG sync: locator heights: [16327, 16325, 16321, 16313, 16297, 16265, 16201, 16073, 15817, 15305, 14281, 12233, 8137, 0]
Aug 22 17:31:42.652 DEBG sync: locator heights: [16837, 16835, 16831, 16823, 16807, 16775, 16711, 16583, 16327, 15815, 14791, 12743, 8647, 455, 0]
Aug 22 17:31:44.299 DEBG sync: locator heights: [17345, 17343, 17339, 17331, 17315, 17283, 17219, 17091, 16835, 16323, 15299, 13251, 9155, 963, 0]
Aug 22 17:31:45.794 DEBG sync: locator heights: [17856, 17854, 17850, 17842, 17826, 17794, 17730, 17602, 17346, 16834, 15810, 13762, 9666, 1474, 0]
Aug 22 17:31:48.042 DEBG sync: locator heights: [18367, 18365, 18361, 18353, 18337, 18305, 18241, 18113, 17857, 17345, 16321, 14273, 10177, 1985, 0]
Aug 22 17:31:49.227 DEBG sync: locator heights: [18878, 18876, 18872, 18864, 18848, 18816, 18752, 18624, 18368, 17856, 16832, 14784, 10688, 2496, 0]
Aug 22 17:31:50.836 DEBG sync: locator heights: [19386, 19384, 19380, 19372, 19356, 19324, 19260, 19132, 18876, 18364, 17340, 15292, 11196, 3004, 0]
Aug 22 17:31:52.584 DEBG sync: locator heights: [19897, 19895, 19891, 19883, 19867, 19835, 19771, 19643, 19387, 18875, 17851, 15803, 11707, 3515, 0]
Aug 22 17:31:54.069 DEBG sync: locator heights: [20408, 20406, 20402, 20394, 20378, 20346, 20282, 20154, 19898, 19386, 18362, 16314, 12218, 4026, 0]
Aug 22 17:31:57.956 DEBG sync: locator heights: [20916, 20914, 20910, 20902, 20886, 20854, 20790, 20662, 20406, 19894, 18870, 16822, 12726, 4534, 0]
Aug 22 17:31:59.794 DEBG sync: locator heights: [21427, 21425, 21421, 21413, 21397, 21365, 21301, 21173, 20917, 20405, 19381, 17333, 13237, 5045, 0]
Aug 22 17:32:01.724 DEBG sync: locator heights: [21938, 21936, 21932, 21924, 21908, 21876, 21812, 21684, 21428, 20916, 19892, 17844, 13748, 5556, 0]
Aug 22 17:32:03.501 DEBG sync: locator heights: [22449, 22447, 22443, 22435, 22419, 22387, 22323, 22195, 21939, 21427, 20403, 18355, 14259, 6067, 0]
Aug 22 17:32:04.735 DEBG sync: locator heights: [22960, 22958, 22954, 22946, 22930, 22898, 22834, 22706, 22450, 21938, 20914, 18866, 14770, 6578, 0]
Aug 22 17:32:06.510 DEBG sync: locator heights: [23471, 23469, 23465, 23457, 23441, 23409, 23345, 23217, 22961, 22449, 21425, 19377, 15281, 7089, 0]
Aug 22 17:32:08.310 DEBG sync: locator heights: [23982, 23980, 23976, 23968, 23952, 23920, 23856, 23728, 23472, 22960, 21936, 19888, 15792, 7600, 0]
Aug 22 17:32:10.435 DEBG sync: locator heights: [24493, 24491, 24487, 24479, 24463, 24431, 24367, 24239, 23983, 23471, 22447, 20399, 16303, 8111, 0]
Aug 22 17:32:12.481 DEBG sync: locator heights: [25001, 24999, 24995, 24987, 24971, 24939, 24875, 24747, 24491, 23979, 22955, 20907, 16811, 8619, 0]
Aug 22 17:32:14.272 DEBG sync: locator heights: [25509, 25507, 25503, 25495, 25479, 25447, 25383, 25255, 24999, 24487, 23463, 21415, 17319, 9127, 0]
Aug 22 17:32:16.162 DEBG sync: locator heights: [26020, 26018, 26014, 26006, 25990, 25958, 25894, 25766, 25510, 24998, 23974, 21926, 17830, 9638, 0]
garyyu commented 6 years ago

How about I save the header height during the HeaderSync stage, so that I can use get_header_by_height() in this stage? Then, after the HeaderSync stage completes, I remove all those saved header heights from the database.

Is this idea feasible? It should have no security impact, since the result is the same as before.

garyyu commented 6 years ago

After further reading of the code, I'm sure this should work :) I will open a PR and request a code review.

antiochp commented 6 years ago

I'd be wary of saving anything to the header by height index before we sync the full block for that height. I think we should maintain the guarantee of "if it is in the db then it has been verified".

Can we build up a mapping of height->hash in memory for the duration of the header sync? What @ignopeverell said though - on receiving a msg containing 500 headers from a peer we need to be careful that these are actually on the correct header chain (and only update the height->hash index if these are "good" headers). We can have multiple headers for a given height (on various forks, valid or otherwise) and we would only want to consider the "most work" chain when building the locator (i.e. back from the head of the header chain).

Edit: It might be worth considering an additional "header hash by height" index in the db, updated only when we successfully update the header_head. This would effectively be an index used only to build a locator quickly.
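The height->hash mapping suggested above could look roughly like this (an in-memory sketch; the names, types, and update rule are illustrative, not grin's actual API). The key property is that the index is only extended when the most-work header head advances, so entries from abandoned forks never leak into locator construction:

```rust
use std::collections::HashMap;

/// Illustrative "header hash by height" index: only rewritten when the
/// most-work header head advances, so it stays consistent with the
/// current best header chain.
struct HeightIndex {
    hash_by_height: HashMap<u64, [u8; 32]>,
    head_height: u64,
}

impl HeightIndex {
    fn new() -> Self {
        HeightIndex { hash_by_height: HashMap::new(), head_height: 0 }
    }

    /// Accept a contiguous run of (height, hash) pairs only if it
    /// extends (or overlaps) the current most-work header chain.
    fn update_on_new_head(&mut self, chain: &[(u64, [u8; 32])]) -> bool {
        match chain.first() {
            Some(&(h, _)) if h <= self.head_height + 1 => {}
            _ => return false, // would leave a gap below the batch; reject
        }
        for &(height, hash) in chain {
            self.hash_by_height.insert(height, hash);
            self.head_height = self.head_height.max(height);
        }
        true
    }

    /// Fast lookup for locator construction.
    fn hash_at(&self, height: u64) -> Option<&[u8; 32]> {
        self.hash_by_height.get(&height)
    }
}

fn main() {
    let mut idx = HeightIndex::new();
    let batch: Vec<(u64, [u8; 32])> = (1u64..=3).map(|h| (h, [h as u8; 32])).collect();
    assert!(idx.update_on_new_head(&batch));
    assert_eq!(idx.hash_at(2), Some(&[2u8; 32]));
    // a batch starting far beyond the head is rejected
    assert!(!idx.update_on_new_head(&[(100, [0u8; 32])]));
}
```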

ignopeverell commented 6 years ago

As @antiochp said about not messing with heights: your "this should work" ignores forks during sync, as well as the rest of the code that relies on that assumption.

What I meant is that, relative to the head, the locator heights right now are always going to be:

[0, -2, -4, -8, -16, -32, -64, -128, -256, -512, -1024, -2048, ...]

The further back you go, the more expensive it gets. When we only add 500 headers, it doesn't really matter whether the locator has a block at -2048 or -2548. So we could reuse that hash instead of walking 2048 blocks back to get a new one. In that scheme, only the recent part of the locator (0 to -512) gets updated.
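A sketch of this scheme (illustrative, not grin's actual get_locator): the height sequence below reproduces the "locator heights" lists logged earlier in the thread, and the partition marks which entries would need a fresh hash lookup versus which could keep the hash cached from the previous locator:

```rust
/// Locator heights walking exponentially back from the head with
/// deltas of -2, -4, -8, ... as described above.
fn locator_heights(head: u64) -> Vec<u64> {
    let mut heights = vec![head];
    let mut current = head;
    let mut step = 2u64;
    while current > step {
        current -= step;
        heights.push(current);
        step *= 2;
    }
    heights.push(0);
    heights
}

fn main() {
    // matches the logged locator for head 511
    assert_eq!(
        locator_heights(511),
        vec![511, 509, 505, 497, 481, 449, 385, 257, 1, 0]
    );

    // The reuse idea: only heights within ~512 of the head need a fresh
    // hash lookup; deeper entries barely move when ~500 new headers
    // arrive, so their hashes can be reused from the previous locator.
    let heights = locator_heights(4093);
    let (fresh, reused): (Vec<u64>, Vec<u64>) =
        heights.iter().copied().partition(|&h| 4093 - h <= 512);
    assert_eq!(fresh, vec![4093, 4091, 4087, 4079, 4063, 4031, 3967, 3839, 3583]);
    assert_eq!(reused, vec![3071, 2047, 0]);
}
```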

garyyu commented 6 years ago

Thanks @ignopeverell , now I understand.

This must be the simplest way, even if get_header_by_height would be the fastest. I like simple :) so let's go with your method.

BTW, I did not plan to ignore forks. I was thinking of updating the Height -> Hash map each time sync_head is updated (walking back via chain.get_block_header(&header.previous) from the tip until the Height -> Hash map matches the longest header chain). But that surely needs much more code than your simple way. For me, a simple solution is the better solution :)

garyyu commented 6 years ago

Code pushed here: https://github.com/mimblewimble/grin/pull/1411. Please give it a code review, thanks.

And the test result, comparing the original locator heights ("locator heights") with the ones built by the new code ("locator heights'"):

Aug 23 15:51:06.545 DEBG sync: locator heights : [0]
Aug 23 15:51:06.552 DEBG sync: locator heights': [0]
Aug 23 15:51:09.550 DEBG sync: locator heights : [511, 509, 505, 497, 481, 449, 385, 257, 1, 0]
Aug 23 15:51:09.551 DEBG sync: locator heights': [511, 509, 505, 497, 481, 449, 385, 257, 1, 0]
Aug 23 15:51:12.555 DEBG sync: locator heights : [1022, 1020, 1016, 1008, 992, 960, 896, 768, 512, 0]
Aug 23 15:51:12.556 DEBG sync: locator heights': [1022, 1020, 1016, 1008, 992, 960, 896, 768, 512, 1, 0]
Aug 23 15:51:15.562 DEBG sync: locator heights : [1533, 1531, 1527, 1519, 1503, 1471, 1407, 1279, 1023, 511, 0]
Aug 23 15:51:15.562 DEBG sync: locator heights': [1533, 1531, 1527, 1519, 1503, 1471, 1407, 1279, 1023, 512, 1, 0]
Aug 23 15:51:18.572 DEBG sync: locator heights : [2044, 2042, 2038, 2030, 2014, 1982, 1918, 1790, 1534, 1022, 0]
Aug 23 15:51:18.573 DEBG sync: locator heights': [2044, 2042, 2038, 2030, 2014, 1982, 1918, 1790, 1534, 1023, 512, 1, 0]
Aug 23 15:51:20.577 DEBG sync: locator heights : [2555, 2553, 2549, 2541, 2525, 2493, 2429, 2301, 2045, 1533, 509, 0]
Aug 23 15:51:20.577 DEBG sync: locator heights': [2555, 2553, 2549, 2541, 2525, 2493, 2429, 2301, 2045, 1534, 1023, 512, 1, 0]
Aug 23 15:51:21.583 DEBG sync: locator heights : [3066, 3064, 3060, 3052, 3036, 3004, 2940, 2812, 2556, 2044, 1020, 0]
Aug 23 15:51:21.583 DEBG sync: locator heights': [3066, 3064, 3060, 3052, 3036, 3004, 2940, 2812, 2556, 2045, 1534, 1023, 512, 1, 0]
Aug 23 15:51:24.590 DEBG sync: locator heights : [3577, 3575, 3571, 3563, 3547, 3515, 3451, 3323, 3067, 2555, 1531, 0]
Aug 23 15:51:24.591 DEBG sync: locator heights': [3577, 3575, 3571, 3563, 3547, 3515, 3451, 3323, 3067, 2556, 2045, 1534, 1023, 512, 1, 0]
Aug 23 15:51:30.597 DEBG sync: locator heights : [4088, 4086, 4082, 4074, 4058, 4026, 3962, 3834, 3578, 3066, 2042, 0]
Aug 23 15:51:30.597 DEBG sync: locator heights': [4088, 4086, 4082, 4074, 4058, 4026, 3962, 3834, 3578, 3067, 2556, 2045, 1534, 1023, 512, 1, 0]
Aug 23 15:51:33.607 DEBG sync: locator heights : [4599, 4597, 4593, 4585, 4569, 4537, 4473, 4345, 4089, 3577, 2553, 505, 0]
Aug 23 15:51:33.607 DEBG sync: locator heights': [4599, 4597, 4593, 4585, 4569, 4537, 4473, 4345, 4089, 3578, 3067, 2556, 2045, 1534, 1023, 512, 1, 0]
Aug 23 15:51:34.612 DEBG sync: locator heights : [5110, 5108, 5104, 5096, 5080, 5048, 4984, 4856, 4600, 4088, 3064, 1016, 0]
Aug 23 15:51:34.612 DEBG sync: locator heights': [5110, 5108, 5104, 5096, 5080, 5048, 4984, 4856, 4600, 4089, 3578, 3067, 2556, 2045, 1534, 1023, 512, 1, 0]
Aug 23 15:51:36.621 DEBG sync: locator heights : [5621, 5619, 5615, 5607, 5591, 5559, 5495, 5367, 5111, 4599, 3575, 1527, 0]
Aug 23 15:51:36.622 DEBG sync: locator heights': [5621, 5619, 5615, 5607, 5591, 5559, 5495, 5367, 5111, 4600, 4089, 3578, 3067, 2556, 2045, 1534, 1023, 512, 1]
Aug 23 15:51:39.628 DEBG sync: locator heights : [6132, 6130, 6126, 6118, 6102, 6070, 6006, 5878, 5622, 5110, 4086, 2038, 0]
Aug 23 15:51:39.629 DEBG sync: locator heights': [6132, 6130, 6126, 6118, 6102, 6070, 6006, 5878, 5622, 5111, 4600, 4089, 3578, 3067, 2556, 2045, 1534, 1023, 512]
Aug 23 15:51:42.636 DEBG sync: locator heights : [6643, 6641, 6637, 6629, 6613, 6581, 6517, 6389, 6133, 5621, 4597, 2549, 0]
Aug 23 15:51:42.636 DEBG sync: locator heights': [6643, 6641, 6637, 6629, 6613, 6581, 6517, 6389, 6133, 5622, 5111, 4600, 4089, 3578, 3067, 2556, 2045, 1534, 1023]
Aug 23 15:51:43.637 DEBG sync: locator heights : [7154, 7152, 7148, 7140, 7124, 7092, 7028, 6900, 6644, 6132, 5108, 3060, 0]
Aug 23 15:51:43.637 DEBG sync: locator heights': [7154, 7152, 7148, 7140, 7124, 7092, 7028, 6900, 6644, 6133, 5622, 5111, 4600, 4089, 3578, 3067, 2556, 2045, 1534]
Aug 23 15:51:45.644 DEBG sync: locator heights : [7665, 7663, 7659, 7651, 7635, 7603, 7539, 7411, 7155, 6643, 5619, 3571, 0]
Aug 23 15:51:45.645 DEBG sync: locator heights': [7665, 7663, 7659, 7651, 7635, 7603, 7539, 7411, 7155, 6644, 6133, 5622, 5111, 4600, 4089, 3578, 3067, 2556, 2045]
Aug 23 15:51:48.656 DEBG sync: locator heights : [8176, 8174, 8170, 8162, 8146, 8114, 8050, 7922, 7666, 7154, 6130, 4082, 0]
Aug 23 15:51:48.656 DEBG sync: locator heights': [8176, 8174, 8170, 8162, 8146, 8114, 8050, 7922, 7666, 7155, 6644, 6133, 5622, 5111, 4600, 4089, 3578, 3067, 2556]
Aug 23 15:51:50.663 DEBG sync: locator heights : [8687, 8685, 8681, 8673, 8657, 8625, 8561, 8433, 8177, 7665, 6641, 4593, 497, 0]
Aug 23 15:51:50.664 DEBG sync: locator heights': [8687, 8685, 8681, 8673, 8657, 8625, 8561, 8433, 8177, 7666, 7155, 6644, 6133, 5622, 5111, 4600, 4089, 3578, 3067]
Aug 23 15:51:53.672 DEBG sync: locator heights : [9198, 9196, 9192, 9184, 9168, 9136, 9072, 8944, 8688, 8176, 7152, 5104, 1008, 0]
Aug 23 15:51:53.673 DEBG sync: locator heights': [9198, 9196, 9192, 9184, 9168, 9136, 9072, 8944, 8688, 8177, 7666, 7155, 6644, 6133, 5622, 5111, 4600, 4089, 3578]
Aug 23 15:51:58.691 DEBG sync: locator heights : [9709, 9707, 9703, 9695, 9679, 9647, 9583, 9455, 9199, 8687, 7663, 5615, 1519, 0]
Aug 23 15:51:58.692 DEBG sync: locator heights': [9709, 9707, 9703, 9695, 9679, 9647, 9583, 9455, 9199, 8688, 8177, 7666, 7155, 6644, 6133, 5622, 5111, 4600, 4089]
Aug 23 15:52:01.700 DEBG sync: locator heights : [10220, 10218, 10214, 10206, 10190, 10158, 10094, 9966, 9710, 9198, 8174, 6126, 2030, 0]
Aug 23 15:52:01.701 DEBG sync: locator heights': [10220, 10218, 10214, 10206, 10190, 10158, 10094, 9966, 9710, 9199, 8688, 8177, 7666, 7155, 6644, 6133, 5622, 5111, 4600]
Aug 23 15:52:04.713 DEBG sync: locator heights : [10731, 10729, 10725, 10717, 10701, 10669, 10605, 10477, 10221, 9709, 8685, 6637, 2541, 0]
Aug 23 15:52:04.713 DEBG sync: locator heights': [10731, 10729, 10725, 10717, 10701, 10669, 10605, 10477, 10221, 9710, 9199, 8688, 8177, 7666, 7155, 6644, 6133, 5622, 5111]
Aug 23 15:52:07.726 DEBG sync: locator heights : [11242, 11240, 11236, 11228, 11212, 11180, 11116, 10988, 10732, 10220, 9196, 7148, 3052, 0]
Aug 23 15:52:07.727 DEBG sync: locator heights': [11242, 11240, 11236, 11228, 11212, 11180, 11116, 10988, 10732, 10221, 9710, 9199, 8688, 8177, 7666, 7155, 6644, 6133, 5622]
Aug 23 15:52:09.732 DEBG sync: locator heights : [11753, 11751, 11747, 11739, 11723, 11691, 11627, 11499, 11243, 10731, 9707, 7659, 3563, 0]
Aug 23 15:52:09.733 DEBG sync: locator heights': [11753, 11751, 11747, 11739, 11723, 11691, 11627, 11499, 11243, 10732, 10221, 9710, 9199, 8688, 8177, 7666, 7155, 6644, 6133]
Aug 23 15:52:11.742 DEBG sync: locator heights : [12264, 12262, 12258, 12250, 12234, 12202, 12138, 12010, 11754, 11242, 10218, 8170, 4074, 0]
Aug 23 15:52:11.742 DEBG sync: locator heights': [12264, 12262, 12258, 12250, 12234, 12202, 12138, 12010, 11754, 11243, 10732, 10221, 9710, 9199, 8688, 8177, 7666, 7155, 6644]
Aug 23 15:52:15.751 DEBG sync: locator heights : [12775, 12773, 12769, 12761, 12745, 12713, 12649, 12521, 12265, 11753, 10729, 8681, 4585, 0]
Aug 23 15:52:15.751 DEBG sync: locator heights': [12775, 12773, 12769, 12761, 12745, 12713, 12649, 12521, 12265, 11754, 11243, 10732, 10221, 9710, 9199, 8688, 8177, 7666, 7155]
...
garyyu commented 6 years ago

After #1400 and #1411 were merged, I tested with the latest master code:

Aug 31 09:49:51.972 DEBG sync_state: sync_status: Initial -> HeaderSync { current_height: 0, highest_height: 0 }
...
Aug 31 09:52:51.107 DEBG sync_state: sync_status: HeaderSync { current_height: 74497, highest_height: 74777 } -> TxHashsetDownload

HeaderSync now takes exactly 3 minutes at the current height of 74777, meeting the optimization objective.

So I can happily close this research now :) Thanks @ignopeverell and @antiochp for the help on this.

sesam commented 6 years ago

Great job, devs! :)