Open jesobreira opened 5 years ago
Coincidentally, me too. I restarted many times, and it kept getting stuck syncing the snapshot:
I have same problem. Same parity version, at same block number. Should I update parity to beta?
@qiluge you have a different problem than what is described in this issue; you may be able to get past it by restarting with different values of `--warp-barrier`, or otherwise sync with `--no-warp`.
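For anyone finding this later, the two suggestions above look roughly like this on the command line (illustrative invocations; the barrier value is just an example and should be a block above the bad snapshot):

```shell
# Option 1: only accept warp snapshots taken above a given block
# (example value - pick one above the snapshot you suspect is bad):
parity --chain ropsten --warp-barrier 6600000

# Option 2: skip warp sync entirely and do a full sync (slow but reliable):
parity --chain ropsten --no-warp
```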
@joshua-mir Ok, thank you, my node is normal now. However, the sync speed is too slow....
Sorry for the many changes of labels here 😅
@jesobreira comparing the blockhashes you are seeing for 6580001 and 6580000, they don't match the canonical ones I'm seeing on chain.
How did you sync to 6580000? If you warp sync and "drop out" at that block number, that means there's a bad warp snapshot in the network, and you may want to set a `--warp-barrier` above 6580000 to get past it - but honestly, the only way to reliably sync these networks after an evidently dirty fork is to use `--no-warp`. There would be a more serious problem if you were not warping.
@joshua-mir Thank you for your answer. So, what I have to do is restart parity with `--warp-barrier 6580000`? After that, does parity start syncing without warp?
@rlaace423 if you want to sync without warp, you can use `--no-warp`. `--warp-barrier 6580000` just means "find a warp snapshot above block 6580000". If you are warping on ropsten, it is important to verify you are on the right chain by checking your blockhash.
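One way to do that check (assuming your node exposes the default HTTP-RPC port 8545) is to ask it for the hash of the block where it stalled and compare that against a block explorer. The exact values below are just for illustration:

```shell
# JSON-RPC block numbers are hex-encoded; 6580000 is 0x646720:
printf '0x%x\n' 6580000   # prints 0x646720

# Ask the local node for its view of that block (RPC endpoint is an assumption):
#   curl -s -X POST -H 'Content-Type: application/json' \
#     --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x646720",false],"id":1}' \
#     http://127.0.0.1:8545
# then compare the returned "hash" field against the canonical ropsten hash.
```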
Hi, @joshua-mir! Sorry for the late reply.
I tried both with and without `--no-warp`, but I haven't tried setting `--warp-barrier` yet. I'm going to try it out and will be back with info on how it went.
Thank you for your attention!
Anecdotally, I've seen plenty of bad snapshots lately around the 6,500,000 mark. I'd try higher up if I were you - 6,600,000 maybe. :/
Hi Everyone,
Curious if anyone has found a reliable way to get their ropsten nodes fully synced and matching etherscan. I can't seem to get beyond 6700001. I've run with the suggested `--warp-barrier`, but even with `--warp-barrier 6185846` it always gets stuck.
Would love to hear if anyone has found a solution.
Cheers!
Parity Ethereum version: 2.5.9
Operating system: Ubuntu 18.04
Installation: downloaded binary from Releases and moved to /usr/local/bin
Fully synchronized: no
Network: ropsten
Restarted: yes
Update:
I was able to resolve the problem by moving my warp barrier further back. Since running with `--warp-barrier 6100000`, I've had no problems. Hope this helps others.
From my side, I had no luck with `--warp-barrier 6100000`. I've also tried `--warp-barrier 6600000` and `--warp-barrier 6700000`, but now I'm stuck at 6700001.
So I did the obvious and set the barrier to 6700002. And, well, my pain continues. It looks like it's doing nothing now.
I ended up using pre-synced nodes from Quiknode (cheaper than Infura), but as my country's currency dropped by more than half, Quiknode has become extremely expensive (it alone now costs more than our entire server stack).
For this reason, I tried once again setting up Parity, this time on a completely different server running nothing but the OS (Ubuntu), formatted and prepared exclusively to run Parity. As this is going to be part of our production infrastructure, this time I didn't attempt to sync Ropsten but the mainnet instead.
But still no luck.
```
mai 11 09:00:20 ethnode parity[11380]: 2020-05-11 09:00:20 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:25 ethnode parity[11380]: 2020-05-11 09:00:25 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:30 ethnode parity[11380]: 2020-05-11 09:00:30 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:35 ethnode parity[11380]: 2020-05-11 09:00:35 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:40 ethnode parity[11380]: 2020-05-11 09:00:40 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:45 ethnode parity[11380]: 2020-05-11 09:00:45 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:50 ethnode parity[11380]: 2020-05-11 09:00:50 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
mai 11 09:00:55 ethnode parity[11380]: 2020-05-11 09:00:55 Syncing snapshot 272/4772 #9795000 20/25 peers 90 KiB chain 136 bytes db 0 bytes queue 32 KiB sync RPC: 0 conn, 0 req/s, 0 µs
```
Binary: v2.7.2-stable
@jesobreira - Can you try syncing again with today's release? If that's not working, try syncing with master. We haven't been able to reproduce this issue on our end.
@jesobreira Maybe you know this already, but in case you don't: when you warp sync a node, you're downloading a snapshot of 100 blocks plus all the state for those blocks. The amount of data is large (~20 GB), so it is chopped up into smaller pieces (in your case 4772), which are downloaded and imported in sequence (yes, it is single-threaded, and for good reason unfortunately). If the peer you are downloading from goes offline, you are unfortunately not able to pick up from another peer (as their snapshot might be from a slightly different set of blocks and state). I suspect this is what happened to you. The only thing you can do is try again and hope you'll find a node with available slots. When I did this last week, I had to try 4-5 times until I found a node I could sync from. Sometimes you get unlucky and get an old snapshot, which means that once you're done there's still a fair amount of syncing to do, but usually it's ok-ish, maybe a day or so.
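A crude way to automate the "try again and hope" part is a restart loop around the client. Everything here is an assumption for illustration - the log pattern, the chunk count from this thread, and the premise that parity gets stopped (or killed externally) when the snapshot peer drops - not documented parity behaviour:

```shell
#!/bin/sh
# Sketch: re-run parity until the log shows the final snapshot chunk
# (4772/4772, the chunk count reported in this thread) was imported.
while :; do
  parity --chain ropsten --warp-barrier 6600000 2>&1 | tee -a sync.log
  if grep -q 'Syncing snapshot 4772/4772' sync.log; then
    echo "snapshot fully downloaded"
    break
  fi
  echo "snapshot interrupted; retrying (hoping for a peer with free slots)..."
  sleep 30
done
```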
I'm unable to sync to the ropsten network on two machines (a Mac Mini running macOS 10.14.6 and a PC running Ubuntu 18.04). Both always get stuck at block 6580000. I've tried resetting the db multiple times; it starts syncing from the beginning again, but gets stuck at the same block.