Closed aaadipopt closed 3 years ago
System information
Geth version:
geth version
Version: 1.1.0-beta Git Commit: 032970b
OS & Version: Ubuntu 20.04.2 LTS
Commit hash: (if develop)
Expected behaviour
Be able to catch up the network
Actual behaviour
Can't sync with the network
Steps to reproduce the behaviour
Create a c5a.8xlarge VM with a 7 TB io2 volume (32k IOPS) on AWS and follow the steps from https://docs.binance.org/smart-chain/developer/fullnode.html
Backtrace
logs captured at a random time:
t=2021-06-24T13:00:32+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=142 mgas=15.332 elapsed=29.928s mgasps=0.512 number=8,079,128 hash=0xa0644aaea9fb2dfd170a55ab43fb308e088c5427d6f426117f3f058e9d449d83 age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:01:02+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=115 mgas=16.480 elapsed=30.427s mgasps=0.542 number=8,079,129 hash=0xbb4d36ca24d1023bf2759287f74809352a73cda56adac1eb0c6c1735266ef672 age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:01:33+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=124 mgas=30.982 elapsed=30.310s mgasps=1.022 number=8,079,130 hash=0xb184bd01fa4fac9e1a29d893cc5b84c22a70e0d5556318199a5bc5054349b84f age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:02:03+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=99 mgas=14.897 elapsed=30.042s mgasps=0.496 number=8,079,131 hash=0xfdfa6ac53bedcb7012fc1dd893d4e6c32aae794a415c458a9f9ca9150a9ca26d age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:02:33+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=122 mgas=20.842 elapsed=30.101s mgasps=0.692 number=8,079,132 hash=0x470f429f9af2d0a59dfd3bda9e7bf51be1228efbe374f943a8edec1f99730f1a age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:03:03+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=156 mgas=24.303 elapsed=29.971s mgasps=0.811 number=8,079,133 hash=0x2cde5260953a1cea7932590fe531faea45fe07247080d22d719c939cfe3bb139 age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:03:33+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=145 mgas=18.900 elapsed=30.028s mgasps=0.629 number=8,079,134 hash=0xd17f503f7adddf7be72993c4cb3e38e1c2f829eb3ae61f741d1521371e7f6772 age=2w3d9h dirty="0.00 B"
logs captured right after restart:
t=2021-06-24T13:09:28+0000 lvl=info msg="Enabling recording of key preimages since archive mode is used"
t=2021-06-24T13:09:28+0000 lvl=info msg="Set global gas cap" cap=25,000,000
t=2021-06-24T13:09:28+0000 lvl=info msg="Allocated trie memory caches" clean="1.20 GiB" dirty="0.00 B"
t=2021-06-24T13:09:28+0000 lvl=info msg="Allocated cache and file handles" database=/mnt/bsc/full/geth/chaindata cache="2.00 GiB" handles=262,144
t=2021-06-24T13:09:54+0000 lvl=info msg="Opened ancient database" database=/mnt/bsc/full/geth/chaindata/ancient readonly=false
t=2021-06-24T13:09:55+0000 lvl=info msg="Initialised chain configuration" config="{ChainID: 56 Homestead: 0 DAO: <nil> DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Ramanujan: 0, Niels: 0, MirrorSync: 5184000, Berlin: <nil>, YOLO v3: <nil>, Engine: parlia}"
t=2021-06-24T13:10:15+0000 lvl=info msg="Initialising Ethereum protocol" network=56 dbversion=8
t=2021-06-24T13:10:16+0000 lvl=info msg="Loaded most recent local header" number=8,079,145 hash=0x5212f46f19b85d36d3b2b470e526dab3f4d0ced53c24e37a38b4d6005b7da324 td=16,098,830 age=2w3d9h
t=2021-06-24T13:10:16+0000 lvl=info msg="Loaded most recent local full block" number=8,079,145 hash=0x5212f46f19b85d36d3b2b470e526dab3f4d0ced53c24e37a38b4d6005b7da324 td=16,098,830 age=2w3d9h
t=2021-06-24T13:10:16+0000 lvl=info msg="Loaded most recent local fast block" number=8,079,145 hash=0x5212f46f19b85d36d3b2b470e526dab3f4d0ced53c24e37a38b4d6005b7da324 td=16,098,830 age=2w3d9h
t=2021-06-24T13:10:17+0000 lvl=warn msg="Loaded snapshot journal" diskroot=0xfe1d101a27eef199c36e4e177b8b70d0de103731242cc29aff2c548d1adc8c00 diffs=missing
t=2021-06-24T13:10:17+0000 lvl=warn msg="Failed to load snapshot, regenerating" err="head doesn't match snapshot: have 0xfe1d101a27eef199c36e4e177b8b70d0de103731242cc29aff2c548d1adc8c00, want 0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a"
t=2021-06-24T13:10:17+0000 lvl=info msg="Rebuilding state snapshot"
t=2021-06-24T13:10:17+0000 lvl=info msg="Resuming state snapshot generation" root=0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a accounts=0 slots=0 storage="0.00 B" elapsed="316.119µs"
t=2021-06-24T13:10:17+0000 lvl=warn msg="Unclean shutdown detected" booted=2021-06-21T10:56:03+0000 age=3d2h14m
t=2021-06-24T13:10:17+0000 lvl=warn msg="Unclean shutdown detected" booted=2021-06-23T16:38:22+0000 age=20h31m55s
t=2021-06-24T13:10:17+0000 lvl=warn msg="Unclean shutdown detected" booted=2021-06-23T18:55:54+0000 age=18h14m23s
t=2021-06-24T13:10:17+0000 lvl=warn msg="Unclean shutdown detected" booted=2021-06-24T10:58:15+0000 age=2h12m2s
t=2021-06-24T13:10:17+0000 lvl=info msg="Starting peer-to-peer node" instance=Geth/v1.1.0-beta-032970b2/linux-amd64/go1.15.5
t=2021-06-24T13:10:17+0000 lvl=info msg="New local node record" seq=44 id=65cb6bdc1c1c5cedb43c852ff481ba40960dde7d88a36ebd65e1c171ff2d76ac ip=127.0.0.1 udp=30311 tcp=30311
t=2021-06-24T13:10:17+0000 lvl=info msg="Started P2P networking" self=enode://7209d8b3d0b5fd133c8378033e081e5b57f88b6d7c305027322e65c966d5f4343883c3d905058e74cc9f43e98d1783cbc7bb065190ca6e50582ae991c520b0a2@127.0.0.1:30311
t=2021-06-24T13:10:17+0000 lvl=info msg="IPC endpoint opened" url=/mnt/bsc/full/geth.ipc
t=2021-06-24T13:10:17+0000 lvl=info msg="HTTP server started" endpoint=127.0.0.1:8545 prefix= cors= vhosts=localhost
t=2021-06-24T13:10:24+0000 lvl=info msg="New local node record" seq=45 id=65cb6bdc1c1c5cedb43c852ff481ba40960dde7d88a36ebd65e1c171ff2d76ac ip=18.191.178.93 udp=30311 tcp=30311
t=2021-06-24T13:10:25+0000 lvl=info msg="Generating state snapshot" root=0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a at=0x002ca0ca4ac07519d7ac45476d67700755342f3a02b8175cfbf7cf24de93820a accounts=36225 slots=148,113 storage="13.23 MiB" elapsed=8.010s eta=3h15m56.08s
t=2021-06-24T13:10:27+0000 lvl=info msg="Block synchronisation started"
t=2021-06-24T13:10:29+0000 lvl=info msg="Downloader queue stats" receiptTasks=0 blockTasks=509 itemSize="20.21 KiB" throttle=3244
t=2021-06-24T13:10:29+0000 lvl=info msg="Importing sidechain segment" start=8,079,146 end=8,079,150
t=2021-06-24T13:10:32+0000 lvl=info msg="Looking for peers" peercount=2 tried=127 static=23
t=2021-06-24T13:10:33+0000 lvl=info msg="Generating state snapshot" root=0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a at=0x005841551d3b7f89812d84f2da2da69fa4fc6536dafdeba7f18d97f343ff9ed7 accounts=71425 slots=251,472 storage="23.86 MiB" elapsed=16.035s eta=3h18m11.876s
t=2021-06-24T13:10:41+0000 lvl=info msg="Generating state snapshot" root=0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a at=0x0082196f724460ba4ac0236fd551c3e30f488f32189279800d852b74dc4b6737 accounts=105,345 slots=270,772 storage="27.51 MiB" elapsed=24.045s eta=3h21m28.839s
t=2021-06-24T13:10:41+0000 lvl=info msg="Imported new chain segment" blocks=2 txs=370 mgas=79.065 elapsed=12.162s mgasps=6.500 number=8,079,147 hash=0xa5bbb02933e036cc5284787eca53eecde196b5d7ca33cf5afecb0e4f03ed177d age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:10:49+0000 lvl=info msg="Generating state snapshot" root=0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a in=0x00a7e9e8993e467c5348c988df65cd614fd93b37c69cc4b6ff77ff837d812cab at=0xad3dcaa9c7793f2bd7c85cf3afe1f2ea50eb825ce31ef993a8f58caffcd5a7a7 accounts=136,273 slots=304,101 storage="32.03 MiB" elapsed=32.056s eta=3h27m59.652s
t=2021-06-24T13:10:55+0000 lvl=info msg="Deep froze chain segment" blocks=4 elapsed=19.544ms number=7,989,148 hash=0x0d832d58bbf35ff814913620b32db8ca3deae8c636313ac461c7dbd4725d9451
t=2021-06-24T13:10:55+0000 lvl=info msg="Imported new chain segment" blocks=2 txs=429 mgas=67.341 elapsed=13.424s mgasps=5.016 number=8,079,149 hash=0x970f7947685fbb7ec38e03f945c79c822e57343372505fbd83c813d6c71a6b5c age=2w3d9h dirty="0.00 B"
t=2021-06-24T13:10:57+0000 lvl=info msg="Generating state snapshot" root=0x475e0ba8ffdcc5dc24959005c5530752ad1e4448de73d12c9c46d7fda6e66a5a at=0x00c43910f4a6c55ab77aa2a9a5e4df0eaaf99da02641beae1fa11e548ed4b178 accounts=159,105 slots=407,659 storage="42.54 MiB" elapsed=40.085s eta=3h42m28.136s
t=2021-06-24T13:11:00+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=173 mgas=31.769 elapsed=4.808s mgasps=6.607 number=8,079,150 hash=0xdb5b3b2ee7546cafa0524e99549cfdf90f680f0b2834bdae5e50373e6d56db44 age=2w3d9h dirty="0.00 B"
bsc-2 | CHANGED | rc=0 >>
t=2021-06-24T13:00:30+0000 lvl=info msg="Aborting state snapshot generation" root=0x3b5b426e2fc287b9ca9d36fcce4469ed8ff6560bda7666c908c7d6f61b9adf00 at=0x324c36a6ccfbcb35dbec03ecd7fc7e8310aef66bbd40003c917bb1b0b88cf6fe accounts=10,437,962 slots=45,799,601 storage="4.06 GiB" elapsed=2h2m20.888s eta=8h20m21.997s
t=2021-06-24T13:00:30+0000 lvl=info msg="Resuming state snapshot generation" root=0x28cce3fbb4d0df554f323c7da455123194469bbb8c7bf0035f5fa995a15fba86 at=0x324c36a6ccfbcb35dbec03ecd7fc7e8310aef66bbd40003c917bb1b0b88cf6fe accounts=10,437,962 slots=45,799,601 storage="4.06 GiB" elapsed=2h2m20.889s eta=8h20m22.001s
t=2021-06-24T13:00:30+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=119 mgas=16.982 elapsed=34.485s mgasps=0.492 number=8,052,259 hash=0x3119e6dda4f648eaf32e84bfd0bc4d128a121e6ebf1261000f03d7c8ef5cea91 age=2w4d7h dirty="0.00 B"
t=2021-06-24T13:00:53+0000 lvl=info msg="Deep froze chain segment" blocks=3 elapsed=17.328ms number=7,962,259 hash=0x5bf744a20135b7dabc998452a2472d4af058bf838d88218f3451be19fe383de8
t=2021-06-24T13:01:04+0000 lvl=info msg="Aborting state snapshot generation" root=0x28cce3fbb4d0df554f323c7da455123194469bbb8c7bf0035f5fa995a15fba86 at=0x324c36a6ccfbcb35dbec03ecd7fc7e8310aef66bbd40003c917bb1b0b88cf6fe accounts=10,437,962 slots=45,799,601 storage="4.06 GiB" elapsed=2h2m55.223s eta=8h22m42.417s
t=2021-06-24T13:01:04+0000 lvl=info msg="Resuming state snapshot generation" root=0x79bf0289fb42934432d351a0d5a08cb98494a1cad8efdcbbdc7a6b1300f30991 at=0x324c36a6ccfbcb35dbec03ecd7fc7e8310aef66bbd40003c917bb1b0b88cf6fe accounts=10,437,962 slots=45,799,601 storage="4.06 GiB" elapsed=2h2m55.225s eta=8h22m42.425s
t=2021-06-24T13:01:04+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=113 mgas=15.431 elapsed=34.269s mgasps=0.450 number=8,052,260 hash=0x9ac2e1b0a92880833af51f2fb74f10d2d859fc479417e7844fd1e507f67bb66c age=2w4d7h dirty="0.00 B"
t=2021-06-24T13:01:38+0000 lvl=info msg="Aborting state snapshot generation" root=0x79bf0289fb42934432d351a0d5a08cb98494a1cad8efdcbbdc7a6b1300f30991 at=0x324c36a6ccfbcb35dbec03ecd7fc7e8310aef66bbd40003c917bb1b0b88cf6fe accounts=10,437,962 slots=45,799,601 storage="4.06 GiB" elapsed=2h3m29.516s eta=8h25m2.664s
t=2021-06-24T13:01:38+0000 lvl=info msg="Resuming state snapshot generation" root=0x40819b3e7f41715f18aeb2044eafae6a2ee9f30f3ee04a82ef59e84ff4815049 at=0x324c36a6ccfbcb35dbec03ecd7fc7e8310aef66bbd40003c917bb1b0b88cf6fe accounts=10,437,962 slots=45,799,601 storage="4.06 GiB" elapsed=2h3m29.526s eta=8h25m2.705s
t=2021-06-24T13:01:38+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=103 mgas=13.353 elapsed=34.299s mgasps=0.389 number=8,052,261 hash=0x06564bf669f5a5350734f373ae79971c20bca40895d390d193f587c9cecdec0b age=2w4d7h dirty="0.00 B"
t=2021-06-24T13:01:53+0000 lvl=info msg="Deep froze chain segment" blocks=2 elapsed=15.556ms number=7,962,261 hash=0x5542050a50aae5b5eb758c2d8fa3f7f777d5c67c42f4872bee9e820d246f9ee2
Those are the client logs. When I restart the service it runs a bit faster, but not enough.
What are we doing wrong?
If you don’t need all the block data, I suggest you use fast sync mode. I used fast sync mode and synced all the data in 9 hours.
Same thing for me :/
config.toml
(only switched to fast sync mode)
It took me about 12 hours to sync... To this you must add the download time of the last 453 GB snapshot at a speed of 25 MB/s (something like 5h), then add the unzip time (about 1h? I didn't measure) 👎🏼
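Those quoted times check out with some quick arithmetic (assuming 25 MB/s means megabytes per second):

```python
# Sanity check of the quoted download estimate: 453 GB snapshot at 25 MB/s.
snapshot_mb = 453 * 1000  # 453 GB expressed in MB
speed_mb_s = 25

hours = snapshot_mb / speed_mb_s / 3600
print(f"download time: ~{hours:.1f} h")  # ~5.0 h, matching "something like 5h"
```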
@maxthedev How big is the full synced blockchain now? I tried doing full sync with snapshot, but unfortunately it drained the whole 4 TB of SSD space I had. How many TBs does a full instance take?
> @maxthedev How big is the full synced blockchain now? I tried doing full sync with snapshot, but unfortunately it drained the whole 4 TB of SSD space I had. How many TBs does a full instance take?
I stopped the instance, so I can't answer you precisely, but something between 500 and 600 GB, or a little bit higher, but definitely not 1 TB ;) Something went wrong, 4 TB is massive! What's your config?
> @maxthedev How big is the full synced blockchain now? I tried doing full sync with snapshot, but unfortunately it drained the whole 4 TB of SSD space I had. How many TBs does a full instance take?
If you just need to monitor transactions and send transactions, I suggest you switch to fast synchronization mode. I only used a hard drive of less than 300GB in this mode
@maxthedev then you must have been running a pruned node instead of an archive one. Why did you need the snapshot in that case?
@ares0x I have tried that too after running out of space in archive mode, but the node never finishes syncing with fast. I posted my issue here https://github.com/binance-chain/bsc/issues/283
@khelle
> @maxthedev then you must have been running a pruned node instead of an archive one. Why did you need the snapshot in that case?

> @maxthedev How big is the full synced blockchain now? I tried doing full sync with snapshot, but unfortunately it drained the whole 4 TB of SSD space I had. How many TBs does a full instance take?
@zhongfu we're running the node with SyncMode = "full" NoPruning = true
and, from logs, t=2021-06-27T10:12:53+0000 lvl=info msg="Enabling recording of key preimages since archive mode is used"
looks like the --gcmode=archive doesn't have any effect
After a while the node is syncing with blocks=1 txs=170 mgas=25.311 elapsed=34.373s
and after a restart blocks=7 txs=1186 mgas=170.096 elapsed=8.119s
so, like in the first post, is this a bug? Because after a restart the node syncs normally (but still very, very slowly compared with the network speed), but after a while it slows down to 1 block/30s
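For what it's worth, the slowdown is quantified right in those log fields: mgasps is just mgas divided by elapsed. A small sketch (hypothetical helper, standard library only) that pulls the numbers out of an import line like the ones quoted above:

```python
import re

def parse_import_line(line: str) -> dict:
    """Extract mgas/elapsed/mgasps from a geth 'Imported new chain segment' line."""
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))
    mgas = float(fields["mgas"])
    # elapsed looks like "34.373s" or "1m1.543s"
    m = re.fullmatch(r"(?:(\d+)m)?([\d.]+)s", fields["elapsed"])
    seconds = int(m.group(1) or 0) * 60 + float(m.group(2))
    return {"mgas": mgas, "elapsed_s": seconds, "mgasps": mgas / seconds}

# One of the slow lines from the logs above: ~0.5 mgas/s
slow = parse_import_line(
    't=2021-06-24T13:00:32+0000 lvl=info msg="Imported new chain segment" '
    "blocks=1 txs=142 mgas=15.332 elapsed=29.928s mgasps=0.512 number=8,079,128"
)
print(round(slow["mgasps"], 3))  # 0.512, matching the logged value
```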
@khelle @maxthedev at block no. 8,061,576 the node takes ~5.8 TB and the config file is: https://gist.github.com/aaadipopt/ad06b6225c3542714ae129924ff973e2
@ares0x is the debug_traceTransaction method returning the same result with fast synchronization mode?
@maxthedev where can we get the latest snapshot to download? Is there one for a full node?
@aaadipopt
> @zhongfu we're running the node with SyncMode = "full" NoPruning = true
> and, from logs, t=2021-06-27T10:12:53+0000 lvl=info msg="Enabling recording of key preimages since archive mode is used"
> looks like the --gcmode=archive doesn't have any effect
Hm? Isn't that log message showing up precisely because gcmode=archive is set?
(As a side note, it seems like NoPruning relates to light nodes -- it probably shouldn't have any (significant, if at all) effect on a full node)
> After a while the node is syncing with blocks=1 txs=170 mgas=25.311 elapsed=34.373s
> and after a restart blocks=7 txs=1186 mgas=170.096 elapsed=8.119s
> so, like in the first post, is this a bug? because after a restart the node is syncing normally (but still very very slow compared with the network speed) but after a while it slows down to 1 block/30s
No clue, honestly. I've never tried a full sync, because it'd take too long -- I'm currently only running non-archive nodes anyway
It should definitely be way faster than 20 mgas/s (mgasps) though -- your node seems to be syncing at less than 1 mgas/s after running for a while. My nodes can do over 50 (and up to 250+) mgas/s consistently, and that's on hardware that costs less than $500!
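To put those mgasps figures in context, here's a rough feasibility check; the ~3 s block time and ~20 mgas per block are assumed round numbers, not values taken from this thread:

```python
# Back-of-envelope: a node only catches up if it imports gas faster than
# the chain produces it. All constants here are illustrative assumptions.
BLOCK_TIME_S = 3.0     # assumed BSC block interval
AVG_BLOCK_MGAS = 20.0  # assumed average block size in mgas

chain_rate = AVG_BLOCK_MGAS / BLOCK_TIME_S  # mgas/s the network produces (~6.7)

for node_rate in (0.5, 6.5, 50.0):  # mgasps values mentioned in this thread
    if node_rate <= chain_rate:
        print(f"{node_rate:5.1f} mgas/s: falling behind forever")
    else:
        backlog_mgas = chain_rate * 14 * 24 * 3600  # a 2-week gap, like age=2w3d9h
        days = backlog_mgas / (node_rate - chain_rate) / 86400
        print(f"{node_rate:5.1f} mgas/s: clears a 2-week gap in ~{days:.1f} days")
```

Under these assumptions, anything below ~7 mgas/s never catches up, which matches what the reporter is seeing at 0.5 mgasps.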
You mention that you're running on AWS EBS storage -- I think that might be a bit of an issue, since latency is pretty mediocre even for io2 volumes:
> Latency: single digit millisecond
Even SATA SSDs are an order of magnitude faster than that.
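Some illustrative arithmetic on why that matters: block import issues many dependent (hence serial) random state reads, so latency adds up. The per-block read count below is a made-up order of magnitude, not a measured figure:

```python
# If import time is dominated by storage latency, it is roughly
# reads_per_block * latency. READS_PER_BLOCK is hypothetical.
READS_PER_BLOCK = 20_000

latencies_s = {
    "EBS io2 (~1 ms)": 1e-3,
    "SATA SSD (~0.1 ms)": 1e-4,
    "NVMe SSD (~0.02 ms)": 2e-5,
}
for name, lat in latencies_s.items():
    print(f"{name}: ~{READS_PER_BLOCK * lat:.1f} s per block")
```

With EBS-like latency this lands in the same ballpark as the ~30 s imports in the logs above; with local NVMe it drops well under a second.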
You may want to consider picking an instance type with locally-attached (but unfortunately ephemeral) NVMe storage, like i3en.3xlarge (which comes with a single 7.5TiB SSD volume). And perhaps pair it with some slower and cheaper storage for chaindata backups, so you don't have to sync the entire chain from scratch if your instance powers down.
(Or find another host that can provide you with locally-attached storage at a lower price.)
> @maxthedev from where can we get the latest snapshot to download? is there any one for full node?
Full, non-archive nodes -- yes, here
But you're running an archive node. I don't think there are any archive node chaindata snapshots lying around. You could find someone else who's also running an archive node and ask for their help, I guess?
@zhongfu thanks, but I don't think gcmode has any effect because I didn't set it. And also, their docs say there are two sync modes: fast and full
> @zhongfu thanks, but I don't think gcmode has any effect because I didn't set it.
Hm, not sure about that then. Have you set gcmode=archive at any point in the past? I'm not sure if it remembers the setting even if you set it back to gcmode=full without clearing chaindata
I was just assuming that you were running an archive node, judging by the size of the storage you specced for your node. 7TB-ish is way more than required for a non-archive full node (for that, something like 1-2TB should be more than enough).
On the other hand, an archive node probably requires more than 7TB of storage -- or if they don't, they probably will soon require more than that, so idk
> And also on their docs they said that there are two sync modes: fast and full
There's also another sync mode available in newer geth clients -- snap sync, which is kind of similar to how fast sync works. But instead of downloading trie nodes one by one (with downloads batched into a few hundred per request), it downloads an (up-to-date) snapshot kept by geth clients that don't disable state snapshot generation.
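The practical difference is mostly round trips. A toy estimate (every constant below is an assumption for illustration, not a measurement):

```python
# fast sync fetches individual trie nodes in small batches; snap sync
# streams contiguous account/storage ranges from peers' flat snapshots.
TRIE_NODES = 700_000_000  # hypothetical state trie size
BATCH = 384               # "a few hundred per request"
RTT_S = 0.1               # assumed 100 ms round trip to a peer

requests = TRIE_NODES // BATCH
hours_serial = requests * RTT_S / 3600  # serial worst case; real clients
                                        # pipeline requests across many peers
print(f"fast sync: ~{requests:,} requests, ~{hours_serial:.0f} h of RTT if serial")
```

Even with heavy pipelining, cutting millions of small requests down to large range downloads is a big win, which is roughly the motivation behind snap sync.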
You probably won't be able to use fast or snap sync for an archive node though. But they'll work pretty well otherwise.
(Also, sync mode != gcmode)
Maybe I enabled it in the past, I don't remember exactly, but
> (Also, sync mode != gcmode)
You're sure that SyncMode = "full" and NoPruning = true isn't equivalent to --gcmode archive?
Anyway, the issue here is that after a while the geth client is syncing at 1 block/30s. Any help on this? :)
Thanks
> (Also, sync mode != gcmode)
> You're sure that SyncMode = "full" and NoPruning = true isn't equivalent to --gcmode archive?
Ah yeah, you're right, --gcmode=archive is indeed equivalent to NoPruning = true:
if ctx.GlobalIsSet(GCModeFlag.Name) {
cfg.NoPruning = ctx.GlobalString(GCModeFlag.Name) == "archive"
}
But otherwise, sync mode is still != gcmode -- the former determines how the node syncs, the latter determines if the node will garbage collect old data, etc.
(I'm not sure if you can do a fast/snap sync (i.e. syncmode=fast or snap) with gcmode=archive -- if you've got archive node peers, it feels like it might just be possible? Unless geth decides to ignore one of those flags, or refuses to start up with such a configuration, of course.)
> Anyway, the issue here is that after a while the geth client is syncing at 1 block/30s. Any help on this? :)
Possibly poor I/O performance (very high latency on EBS, even on io2) -- see here
thanks, but poor I/O performance would probably be constant from the first block, not appear only after a few minutes
> thanks, but poor I/O performance would probably be constant from the first block, not appear only after a few minutes
eh, maybe. ¯\_(ツ)_/¯ I'm not really going to bother figuring out why that's the case, but here are some ideas
also, a quick look around the issue tracker reveals that most (?) of the people who manage to catch up to chain head on AWS instances (especially within a reasonable amount of time) are using i3 and i3en (etc) instances, which have ephemeral, directly-attached NVMe SSD volumes
of course, there are those who (apparently) have managed to sync on EBS volumes -- on gp3 even! (but did they sync from a chaindata snapshot, or was it from before BSC's block gas limit hit 60? I'm not too sure about that.)
but the numbers don't lie:
ok, I started a new node and it looks good, and it's syncing pretty fast (first 5M blocks in one day) with a c5a.8xlarge vm and a gp3 volume w. 5000 IOPS and 125 MB/s throughput :)
the initial node had the older geth version and after the upgrade to 1.1.0-beta the slowdown appeared.
as for archive mode, from here, it looks like with NoPruning = true and SyncMode = "full" the node will be in archive mode, and it required ~700 GB
and, if I restore the disk from a snapshot (AWS snapshot), I get the same awkward behaviour:
t=2021-06-29T13:08:50+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=68 mgas=27.689 elapsed=14.991s mgasps=1.847 number=5,032,030 hash=0x054b7a77761ac9df85ab13139caf7227d04a7a699df3a153d18d4209ed1ec0b9 age=4mo1w2d dirty="0.00 B"
t=2021-06-29T13:09:51+0000 lvl=info msg="Imported new chain segment" blocks=1 txs=91 mgas=13.575 elapsed=1m1.543s mgasps=0.221 number=5,032,031 hash=0xce1927fe272e0cdd16800b52f6f67a332ca5d7e9157222da2f0ab697fa4025b7 age=4mo1w2d dirty="0.00 B"
> ok, I started a new node and it looks good, and it's syncing pretty fast (first 5M blocks in one day) with a c5a.8xlarge vm and a gp3 volume w. 5000 IOPS and 125 MB/s throughput :)
> the initial node had the older geth version and after the upgrade to 1.1.0-beta the slowdown appeared.
> as for archive mode, from here, it looks like with NoPruning = true and SyncMode = "full" the node will be in archive mode, and it required ~700 GB
pretty sure a BSC archive node will take more than 5.5TiB now
Hi all. Thank you for your report. We have received many reports of a sync issue. You can try the latest version. If you have any problems, please give feedback in #338. We will pay attention to issue #338 for a long time, and if there are any updates, we will explain them on this issue.
Thanks.