Closed: WhoSoup closed this issue 3 years ago
Several attempts to restart syncing, on all branches (develop, FD-1225, and dev_wax_merge), resulted in the inability to sync; the node never even reached block 1. It seems there was a simultaneous testnet stall: https://discordapp.com/channels/419201548372017163/427786712974884874/755741270328082444
I might have to roll my own local network to be able to debug this better
Thought I'd try again after the testnet restart with a dev_wax_merge build, but it also stalled out:
Once again, the p2p queue was overflowing with DataResponse messages for entries around the height it was trying to sync, coming from a multitude of different peers. I checked the control panels of some of those nodes and they are fully synced, so under normal conditions they would never send us those data requests.
This leads me to believe that wax somehow interprets its own requests for messages as requests coming in from other peers, meaning there could be a crossover loop somewhere that inappropriately routes outgoing messages back into the node.
I found one possible source of this error: p2pproxy is being added to fnode0 twice:
and
This could be the reason for the weird crossover behavior, but I'll keep looking while I test this.
The biggest problem I have so far is that I can't reliably test things, as the error does not happen consistently. Over the weekend, I created a local network with ~27k blocks, and the error does occur when syncing against that node. It still takes quite a while to manifest, but on the order of an hour rather than a day as on the testnet.
I believe I found the cause of the errors: queue sizes and slow processing created a bottleneck that left the network unable to deliver inbound messages.
In the base version, the flow is something like this: network (5,000) => inmsgqueue (10,000) => msgqueue (50) => execution. In Wax, it changed to: network (1,000) => BMV in (100) => BMV out (100) => inmsgqueue (10,000) => msgqueue (50) => execution
The entry sync routine tried to request a total of 1,000 entries at a time, which meant it was possible for all requests to come in faster than the BMV could clear them. In https://github.com/WhoSoup/factomd/pull/1, I adjusted the sizes of the queues: network from 1,000 to 2,000 and BMV in/out from 100 to 1,000. Additionally, I re-integrated the separate data queue for data requests and introduced a similar queue for data responses, making the flow:
Network (2,000) => BMV in (1,000) => BMV out (1,000) => Data Queue (10,000) => execution.
The fix proposed above seems to work mostly well, but it still encountered an overflow around height 126487. Once again, the queue was being overloaded with incoming messages, but this time the cause is what prompted the rework in the first place: the entrysync routine simply sends too many requests at a time. It has an internal counter for how many entries it should request at once (750), but it doesn't split up a dblock that contains more than that.
The height 126487 isn't arbitrary: it falls during a heavy load test. Before block 126480, the load test ran at around 15 EPS (~9,000 entries per block), which fit just inside the buffer of 10,000. From block 126481 on, the rate went to 20 EPS (~12,000 entries per block), all of which the entry sync routine would try to fetch simultaneously.
This is backed up by a small debug print that logged the number of messages it tried to send per second:
```
rate: 4 messages over 51.287739137s = 0.07799134967122515 mps
rate: 541 messages over 1.098852058s = 492.331952589434 mps
rate: 918 messages over 1.000021589s = 917.9801422535628 mps
rate: 3506 messages over 1.019954399s = 3437.4084361675914 mps
rate: 2127 messages over 1.049222221s = 2027.2158464224126 mps
rate: 2551 messages over 1.01068979s = 2524.0186520248944 mps
rate: 2556 messages over 1.018370165s = 2509.8927530878686 mps
rate: 2624 messages over 1.053598198s = 2490.5128780937625 mps
rate: 2732 messages over 1.007259628s = 2712.3095305824263 mps
rate: 1463 messages over 1.002419032s = 1459.4694325230498 mps
rate: 1459 messages over 1.041508923s = 1400.8520813844864 mps
rate: 643 messages over 1.00211344s = 641.6438965326245 mps
rate: 109 messages over 1.000436081s = 108.95248331611872 mps
rate: 333 messages over 1.008625851s = 330.1521429412679 mps
rate: 159 messages over 1.005653642s = 158.10611749887616 mps
```
(Note: this was a develop build using P2P1)
This is a problem addressed in https://github.com/FactomProject/factomd/pull/1044.
It seems there are two problems at play here: 1) the BMV bottleneck and 2) uncapped entrysync requests for blocks with over a thousand entries. The combination has made the entire debugging process extraordinarily painful.
I'm going to try and sync from scratch once again using a build that addresses both of these.
This has been fixed in #1056.
This has taken me a while to confirm because syncing is so incredibly slow, but it appears that nodes will not fully sync using the FD-1225_wax_rollup branch. This is the third-ish time I've tried: the first time I assumed it was a fluke, the second time I thought it was related to my EntrySync rework, but now it's happened again in the rollup branch. The node reaches a certain height (the first pass was at 99%: 81861 / 82065; the exact height varies) and then seems to stall, with the console spammed full of p2p overflow messages:
Those are "DataResponse" messages, trying to respond to requests from other nodes for entries from height 81862:
https://testnet.factoid.org/entry?hash=4ef1232d5a126e8f5d85d779b06cbe5bc669681fe2b8a862871878a615bda0ad
https://testnet.factoid.org/entry?hash=630edbc88650f565993e230fefda944f0e54c31b6ab347f3bee83325cccb08e7
https://testnet.factoid.org/entry?hash=a9dcd90989521dcbad8b90bbc7f4be91697de6907a49e4f65e3912349c4e37ec
I'm not sure why it's spamming those replies, particularly for the exact height it is trying to sync itself, but it seems suspect.
After restarting the node, it is now at 100% (82065/82065) and will not sync any further.