bnb-chain / bsc

A BNB Smart Chain client based on the go-ethereum fork
GNU Lesser General Public License v3.0

BSC synchronization issues #338

Closed · j75689 closed this issue 2 years ago

j75689 commented 3 years ago

Description

During the 24 hours of July 28, Binance Smart Chain (BSC) processed 12.9 million transactions. This number and the numbers below all come from the great BSC network explorer bscscan.com, powered by the Etherscan team.

This means 150 transactions per second (TPS) processed on mainnet, not in isolated test environments or a white paper. If we zoom in, we will also notice that these were not light transactions such as BNB or BEP20 transfers, but heavy ones, as many users were "fighting" each other in “Play and Earn” games, contributed mostly by the GameFi dApps from MVBII.

The total gas used on July 28 was 2,052,084 million. If all of that gas had gone to simple BEP20 transfers, which typically cost 50k gas each, it would cover 41 million transactions; 41 million transactions over 86,400 seconds stands for roughly 470 TPS.

On the other hand, with the flood of volume, the network experienced congestion on July 28 for about 4 hours, and many low-spec or old-version nodes could not catch up with block processing in time.

Updates

A new beta version of the client has been released with better performance to handle the high volume. Please feel free to upgrade and raise bug reports if you encounter any issues. Please note this is just a beta version; some known bug fixes are on the way. Click here to download the beta client.

To improve the performance of nodes and achieve faster block times, we recommend the following specifications.

  • validator:

    • 2 TB of free disk space, solid-state drive (SSD), gp3, 8k IOPS, 250 MB/s throughput, read latency <1ms.
    • 12 cores of CPU and 48 GB of memory (RAM).
    • m5zn.3xlarge instance type on AWS, or c2-standard-8 on Google Cloud.
    • A broadband Internet connection with upload/download speeds of 10 megabytes per second.
  • fullnode:

    • 1 TB of free disk space, solid-state drive (SSD), gp3, 3k IOPS, 125 MB/s throughput, read latency <1ms. (If starting with snap/fast sync, it will need an NVMe SSD.)
    • 8 cores of CPU and 32 GB of memory (RAM).
    • c5.4xlarge instance type on AWS, or c2-standard-8 on Google Cloud.
    • A broadband Internet connection with upload/download speeds of 5 megabytes per second.

If you don’t need an archive node, download the latest snapshot and resync from scratch from there.
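
For reference, a minimal sketch of that flow, assuming a full-node snapshot archive; the real download URL and archive layout come from the snapshot page linked above, so the paths below are placeholders:

# stop the node first, then fetch and unpack the latest snapshot (placeholder URL)
wget -O snapshot.tar.gz "https://<snapshot-mirror>/geth-<date>.tar.gz"
tar -xzvf snapshot.tar.gz -C ./node/geth/
# restart geth in full sync against the restored datadir
./build/bin/geth --config ./config.toml --datadir ./node --syncmode=full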

Problems

  • Fast/snap sync mode cannot catch up with the current state data.
  • Full sync cannot catch up with the current block.
  • High CPU usage.

Suggestions

  • Use the latest released binary version.
  • Don't use fast/snap sync for now; use the snapshot we provide to run full sync.
  • Confirm your hardware is sufficient; refer to our official documents (we will update them if there are new discoveries).
  • Regularly prune data to reduce disk pressure.
  • Make sure the peers you connect to are not too slow.

Reference PRs


We will update this board if there are any updates. If you have a suggestion or want to propose some improvements, please visit our GitHub. If you encounter any synchronization issues, please report them here.

kgcdream2019 commented 3 years ago

I updated to the latest binary, 1.1.1-beta. My hardware: 36 CPUs, 72 GB RAM, gp3 with 10000 IOPS and 1000 MB/s throughput. The node is now 1 day 23 hours behind and imports only 2-3 blocks per 10 seconds. What is the issue? This is my geth command:

./build/bin/geth --config ./config.toml --datadir ./node --gcmode archive --syncmode=full --snapshot=false --http.vhosts=* --cache=18000 --cache.preimages --rpc.allow-unprotected-txs --txlookuplimit 0 console

jun0tpyrc commented 3 years ago

I updated to the latest binary, 1.1.1-beta. My hardware: 36 CPUs, 72 GB RAM, gp3 with 10000 IOPS and 1000 MB/s throughput. The node is now 1 day 23 hours behind and imports only 2-3 blocks per 10 seconds. What is the issue?

my experience with a few archive nodes

once cold-started and running for some time, it settles down at a rate of:
nvme raid0: 3-4 blocks per 8-12s <<< catches up very, very slowly
aws gp3 raid0 (1000 MB/s, 16000 IOPS): 2-3 blocks per 9-13s <<< will not catch up to recent blocks

after 1.1.1-beta it seems a bit better; I am seeing 5-6 blocks in ~9s, which lets the node catch up a bit faster

j75689 commented 3 years ago

Hi @kgcdream2019, could you please provide some logs and disk metrics data? Maybe we can see some clues in them. In addition, have you tried pruning data? Pruning some data to reduce disk usage may help.
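
For reference, a minimal sketch of offline pruning, assuming a client built on a go-ethereum 1.10+ base (which ships the snapshot prune-state subcommand); it does not apply to archive nodes, and the node must be stopped first:

# stop geth, then prune unreferenced state offline (this can take hours)
./build/bin/geth snapshot prune-state --datadir ./node
# restart the node once pruning finishes
./build/bin/geth --config ./config.toml --datadir ./node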

smileuwu97 commented 3 years ago

How can I check whether a peer is slow? Maybe I can drop slow peers manually and keep only fast peers.
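
One way to inspect and drop peers by hand, sketched with geth's standard admin API over the console (the enode URL is a placeholder):

# attach a console to the running node
./build/bin/geth attach ./node/geth.ipc
> admin.peers                                       // lists every peer with its address and eth head
> admin.removePeer("enode://<node-id>@<ip>:30311")  // disconnect one that lags or is far away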

kgcdream2019 commented 3 years ago

Hi @kgcdream2019, could you please provide some logs and disk metrics data? Maybe we can see some clues in them. In addition, have you tried pruning data? Pruning some data to reduce disk usage may help.

t=2021-07-30T08:57:26+0000 lvl=info msg="Imported new chain segment" blocks=2 txs=1046 mgas=169.139 elapsed=10.183s mgasps=16.609 number=9,542,712 hash=0xd7c612264bc67ab77fc777989917b7c4740905dd161bf77599327346d35e3980 age=2d1h5m dirty="6.80 MiB"
t=2021-07-30T08:57:29+0000 lvl=info msg="Deep froze chain segment" blocks=12 elapsed=36.281ms number=9,452,712 hash=0xfb936c0145c7fceed74dc1339be074b8531343cd06cbd17c0088c62989ef7a93
t=2021-07-30T08:57:36+0000 lvl=info msg="Imported new chain segment" blocks=2 txs=896 mgas=135.918 elapsed=9.991s mgasps=13.603 number=9,542,714 hash=0x7adfb39801cc4589469a7e033232eb3ca7c64c40414964006a06c65a6eb85fe8 age=2d1h5m dirty="4.82 MiB"
t=2021-07-30T08:57:47+0000 lvl=info msg="Imported new chain segment" blocks=2 txs=1076 mgas=169.173 elapsed=11.105s mgasps=15.234 number=9,542,716 hash=0x5c4ce24d72a1126fa586a568638c76f931054de212f396b8bc2e1471c5597546 age=2d1h5m dirty="7.42 MiB"

kgcdream2019 commented 3 years ago

These are logs from my archive node: 2-3 blocks per 12 seconds.

My disk is gp3, 10000 IOPS, 1000 MB/s.

I upgraded from gp2 because syncing speed was very slow.

I checked disk speed using the following command:

root@ip-172-31-44-89:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

and got about 311 MB/s while geth was running:

16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.45406 s, 311 MB/s
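
Note that dd only measures sequential writes, while chain-data access is dominated by small random reads. A random-read check with fio (a generic disk benchmark, nothing BSC-specific; the test file path is arbitrary) would look something like:

fio --name=randread --filename=./fio-test --size=4G --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=64 --runtime=60 --time_based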

I changed MaxPeers in config.toml from 30 to 200, but there was no effect.

kgcdream2019 commented 3 years ago

my experience with a few archive nodes

once cold-started and running for some time, it settles down at a rate of:
nvme raid0: 3-4 blocks per 8-12s <<< catches up very, very slowly
aws gp3 raid0 (1000 MB/s, 16000 IOPS): 2-3 blocks per 9-13s <<< will not catch up to recent blocks

after 1.1.1-beta it seems a bit better; I am seeing 5-6 blocks in ~9s, which lets the node catch up a bit faster

I am using aws gp3 with 1000 MB/s and 10000 IOPS on 1.1.1-beta, but block import speed is still 2-3 blocks per 9-13s <<< very slow

kgcdream2019 commented 3 years ago

Hi @kgcdream2019, could you please provide some logs and disk metrics data? Maybe we can see some clues in them. In addition, have you tried pruning data? Pruning some data to reduce disk usage may help.

Hi @j75689, my node is an archive node, so I can't prune any data.

jun0tpyrc commented 3 years ago

I am using aws gp3 with 1000 MB/s and 10000 IOPS on 1.1.1-beta, but block import speed is still 2-3 blocks per 9-13s <<< very slow

5-6 blocks in ~9s <= that improvement is seen consistently on nvme raid0. gp3 may also improve a bit, to 3-4 blocks in ~9s, but I feel like it will never catch up, just not fall behind a lot more.

apogiatzis commented 3 years ago

Has anyone tried syncing with an i3.2xlarge AWS instance? They have local NVMe stores.

jun0tpyrc commented 3 years ago

Has anyone tried syncing with an i3.2xlarge AWS instance? They have local NVMe stores.

It's good. We have multiple full nodes running on it without txlimit pruning.

kgcdream2019 commented 3 years ago

5-6 blocks in ~9s <= that improvement is seen consistently on nvme raid0.

How do I create an NVMe RAID0? And can I upgrade gp3 to NVMe RAID0 without losing blockchain data?
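
For reference: instance-store NVMe disks and an EBS gp3 volume are separate devices, so there is no in-place upgrade; you assemble the array first and then copy your datadir (or restore a snapshot) onto it. A minimal RAID0 sketch with mdadm, assuming two spare NVMe devices (device names are placeholders):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
mkfs.ext4 /dev/md0
mount /dev/md0 /data   # then move or restore the node datadir under /data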

apogiatzis commented 3 years ago

It's good. We have multiple full nodes running on it without txlimit pruning.

That's great to know, thanks! Do you mind sharing your config?

jun0tpyrc commented 3 years ago

That's great to know, thanks! Do you mind sharing your config?

I meant i3en.2xlarge, but I think i3.2xlarge will do too, because I also have multiple i3en.xlarge and they are still in sync (but mind your disk: it won't last more than a few more months, as the data is ~1.3-1.4 TB now). On i3en.xlarge the CPU becomes the bottleneck when serving heavy RPC calls, and then the node falls behind.

instance | vCPU | RAM (GiB) | storage
i3en.xlarge | 4 | 32 | 1 x 2,500 GB NVMe
i3en.2xlarge | 8 | 64 | 2 x 2,500 GB NVMe
i3.2xlarge | 8 | 61 | 1 x 1.9 TB NVMe SSD

We run with --syncmode=fast --gcmode=full --snapshot=false --txlookuplimit=0 --cache.preimages on i3en.2xlarge. Nodes that have fallen behind can close the gap gradually; expect roughly another minute of catch-up per minute of delay once p2p has "warmed up". It doesn't matter much between v1.1.0-beta and v1.1.1-beta, but prefer 1.1.1-beta directly for a new setup.
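
Spelled out as a full command line (the binary, config, and datadir paths are assumptions), that setup would be:

./build/bin/geth --config ./config.toml --datadir ./node --syncmode=fast --gcmode=full --snapshot=false --txlookuplimit=0 --cache.preimages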

apogiatzis commented 3 years ago


Thanks for the information! I will try it out!

Crypto2 commented 3 years ago

You may want to put the snapshots on more or faster servers? It's going to take 4 days to download, which is nuts, just to get a node synced :(

lovelyrrg51 commented 3 years ago

I meant i3en.2xlarge, but I think i3.2xlarge will do too, because I also have multiple i3en.xlarge and they are still in sync (but mind your disk: it won't last more than a few more months, as the data is ~1.3-1.4 TB now). On i3en.xlarge the CPU becomes the bottleneck when serving heavy RPC calls, and then the node falls behind.

instance | vCPU | RAM (GiB) | storage
i3en.xlarge | 4 | 32 | 1 x 2,500 GB NVMe
i3en.2xlarge | 8 | 64 | 2 x 2,500 GB NVMe
i3.2xlarge | 8 | 61 | 1 x 1.9 TB NVMe SSD

We run with --syncmode=fast --gcmode=full --snapshot=false --txlookuplimit=0 --cache.preimages on i3en.2xlarge. Nodes that have fallen behind can close the gap gradually; expect roughly another minute of catch-up per minute of delay once p2p has "warmed up". It doesn't matter much between v1.1.0-beta and v1.1.1-beta, but prefer 1.1.1-beta directly for a new setup.

i3en.2xlarge and i3.2xlarge are not the same: i3en.2xlarge has network speed up to 25 Gbps, i3.2xlarge up to 10 Gbps.

Network performance matters most when building any type of node, so i3en.2xlarge is the best. @jun0tpyrc so your node is working well now, without any issues, on v1.1.1-beta?

jayboy-mabushi commented 3 years ago

@kgcdream2019 can you let us know when you are synced?

kgcdream2019 commented 3 years ago

@jayboy-mabushi Yes, I'll let you know. My archive nodes are still out of sync, and the gap in block numbers keeps getting bigger over time. The node is now about 2 days 3 hours behind.

jayboy-mabushi commented 3 years ago

did anyone manage to sync? what config parameters did you use?

lovelyrrg51 commented 3 years ago

Just synced with an i3en.2xlarge instance type in 16 hours on version 1.1.1-beta.

litebarb commented 3 years ago

Hi guys, I just downloaded the pre-built binary from https://github.com/binance-chain/bsc/releases/download/v1.1.1-beta/geth_linux, but when I try to run geth_linux I get this error: "geth_linux: command not found". Any ideas? I wasn't getting this error with the previous 1.1.0 stable and beta binary releases.

lovelyrrg51 commented 3 years ago

@litebarb please download https://github.com/binance-chain/bsc (master branch) and then make geth.
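
For anyone following along, building from source is just (a Go toolchain is required):

git clone https://github.com/binance-chain/bsc.git
cd bsc
make geth
./build/bin/geth version   # the binary lands in ./build/bin/geth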

litebarb commented 3 years ago

@lovelyrrg51 thanks, looks like that worked! Do you mind sharing your config with us? Was your starting config syncmode=full with a snapshot, or syncmode=fast from scratch?

apogiatzis commented 3 years ago

Just synced with an i3en.2xlarge instance type in 16 hours on version 1.1.1-beta.

with snapshot??

lovelyrrg51 commented 3 years ago

No

lovelyrrg51 commented 3 years ago

@lovelyrrg51 thanks, looks like that worked! Do you mind sharing your config with us? Was your starting config syncmode=full with a snapshot, or syncmode=fast from scratch?

syncmode is fast

Crypto2 commented 3 years ago

@litebarb - That just means it's not marked executable: chmod +x geth_linux

litebarb commented 3 years ago

@lovelyrrg51 is it possible to share your config.toml file? I've been running for at least 18 hours now.

@Crypto2 thanks!

litebarb commented 3 years ago

Still importing new states after 26 hours on version 1.1.1-beta. Can someone who has managed to fast-sync share their config file?

apogiatzis commented 3 years ago

Finally managed to sync a node using an i3en.2xlarge instance in fast syncmode with the default config, guys. It synced in less than 24 hours.

bytesiz commented 3 years ago

I was also able to sync with the i3en.2xlarge using all default settings/commands and the most recent snapshot. Took approximately 21 hours.

litebarb commented 3 years ago

@bytesiz This is with syncmode=full?

bytesiz commented 3 years ago

This is what I ran: ./build/bin/geth --config ./config.toml --datadir ./node --cache 18000 --rpc.allow-unprotected-txs --txlookuplimit 0

I didn't specify a syncmode; I'm assuming it defaults to full... am I wrong?

apogiatzis commented 3 years ago

This is what I ran: ./build/bin/geth --config ./config.toml --datadir ./node --cache 18000 --rpc.allow-unprotected-txs --txlookuplimit 0

I didn't specify a syncmode; I'm assuming it defaults to full... am I wrong?

I think the default is "fast".

offerm commented 3 years ago

Thanks for the info here. Took me 12 hours to sync on an i3en.2xlarge model.

lovelyrrg51 commented 3 years ago

@lovelyrrg51 is it possible to share your config.toml file? I've been running for at least 18 hours now.

@Crypto2 thanks!

@litebarb I don't run my BSC node with a config.toml. Instead, I run geth as a daemon:

ExecStart=/usr/bin/geth --networkid=56 --http.addr=0.0.0.0 --http.port 8545 --syncmode=fast --cache=1024 --http --http.api admin,eth,debug,miner,net,txpool,personal,web3 --datadir /home/ubuntu/.bsc/mainnet

It's the same setup as an ETH node, but you should first run geth --datadir node init genesis.json with the genesis.json that Binance has already provided.
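
A minimal systemd unit around that ExecStart, sketched with an assumed user and paths, would look something like:

# /etc/systemd/system/bsc.service
[Unit]
Description=BSC node
After=network.target

[Service]
User=ubuntu
ExecStart=/usr/bin/geth --networkid=56 --http.addr=0.0.0.0 --http.port 8545 --syncmode=fast --cache=1024 --http --http.api admin,eth,debug,miner,net,txpool,personal,web3 --datadir /home/ubuntu/.bsc/mainnet
Restart=on-failure

[Install]
WantedBy=multi-user.target

Initialize once with geth --datadir /home/ubuntu/.bsc/mainnet init genesis.json, then start it with systemctl enable --now bsc.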

litebarb commented 3 years ago

@lovelyrrg51 thanks.

I decided to stop the fast sync and try the snapshot with full sync instead. However, my process stops after a while with just 'Killed'. The private server was not turned off. Does anyone know the reason?

INFO [08-02|08:30:35.648] Imported new chain segment blocks=7 txs=3881 mgas=598.093 elapsed=8.477s mgasps=70.549 number=9,549,005 hash=adbc04..0ed8d4 age=4d10h45m dirty=1.51GiB
INFO [08-02|08:30:44.633] Imported new chain segment blocks=10 txs=4679 mgas=773.095 elapsed=8.984s mgasps=86.045 number=9,549,015 hash=d6cc5d..450adc age=4d10h45m dirty=1.51GiB
INFO [08-02|08:30:52.934] Imported new chain segment blocks=8 txs=3764 mgas=679.957 elapsed=8.300s mgasps=81.916 number=9,549,023 hash=b9a011..6b4d51 age=4d10h44m dirty=1.52GiB
INFO [08-02|08:31:02.444] Imported new chain segment blocks=11 txs=4874 mgas=816.739 elapsed=9.510s mgasps=85.878 number=9,549,034 hash=8f8a52..642007 age=4d10h44m dirty=1.52GiB
INFO [08-02|08:31:10.870] Imported new chain segment blocks=8 txs=4324 mgas=676.254 elapsed=8.426s mgasps=80.257 number=9,549,042 hash=b21f19..924acd age=4d10h43m dirty=1.53GiB
INFO [08-02|08:31:11.839] Deep froze chain segment blocks=60 elapsed=99.552ms number=9,459,042 hash=4bf901..0df6f9
INFO [08-02|08:31:18.934] Imported new chain segment blocks=9 txs=4209 mgas=659.766 elapsed=8.063s mgasps=81.817 number=9,549,051 hash=b9509f..392be9 age=4d10h43m dirty=1.53GiB
INFO [08-02|08:31:27.270] Imported new chain segment blocks=9 txs=4518 mgas=690.856 elapsed=8.336s mgasps=82.875 number=9,549,060 hash=50d480..38d8fc age=4d10h43m dirty=1.54GiB
INFO [08-02|08:31:35.523] Imported new chain segment blocks=8 txs=4321 mgas=676.056 elapsed=8.252s mgasps=81.918 number=9,549,068 hash=f174d1..7479a3 age=4d10h42m dirty=1.54GiB
INFO [08-02|08:31:43.756] Imported new chain segment blocks=12 txs=5378 mgas=823.584 elapsed=8.232s mgasps=100.042 number=9,549,080 hash=25dc99..a06f4e age=4d10h42m dirty=1.54GiB
INFO [08-02|08:31:52.044] Imported new chain segment blocks=8 txs=3762 mgas=609.295 elapsed=8.288s mgasps=73.511 number=9,549,088 hash=55ee8d..7dbb27 age=4d10h42m dirty=1.55GiB
INFO [08-02|08:32:00.726] Imported new chain segment blocks=11 txs=5165 mgas=840.979 elapsed=8.681s mgasps=96.865 number=9,549,099 hash=3aefb5..1dcf05 age=4d10h41m dirty=1.55GiB
INFO [08-02|08:32:08.881] Imported new chain segment blocks=8 txs=3927 mgas=641.895 elapsed=8.155s mgasps=78.710 number=9,549,107 hash=e665ba..e4f105 age=4d10h41m dirty=1.55GiB
INFO [08-02|08:32:11.929] Deep froze chain segment blocks=69 elapsed=89.124ms number=9,459,111 hash=d2c5b1..cb1373
INFO [08-02|08:32:16.963] Imported new chain segment blocks=10 txs=4732 mgas=761.100 elapsed=8.081s mgasps=94.180 number=9,549,117 hash=bf860c..cf4253 age=4d10h41m dirty=1.55GiB
INFO [08-02|08:32:25.501] Imported new chain segment blocks=7 txs=3277 mgas=541.315 elapsed=8.538s mgasps=63.399 number=9,549,124 hash=9acf28..332e7c age=4d10h40m dirty=1.56GiB
Killed

lovelyrrg51 commented 3 years ago

Actually, without snapshot, it works well. :)

litebarb commented 3 years ago

@lovelyrrg51 yeah, for some reason I can't get it to work, even with sufficient hardware. I have a Hynix P31 Gold NVMe, 32 GB RAM, and 6 cores; it should work. How did you manage to sync with only 1024 MB of cache? lol. Not sure what I'm doing wrong; I hope someone can point it out to me.

lovelyrrg51 commented 3 years ago

In my experience, a BSC node needs at least 64 GB of RAM.

zy2d commented 3 years ago

In my experience, a BSC node needs at least 64 GB of RAM.

BSC's memory usage is not that high [screenshot]. This server has 32 GB of memory.

noprom commented 3 years ago

In addition, have you tried pruning data? Pruning some data to reduce disk usage may help.

How do I prune data to reduce disk usage?

dwjorgeb commented 3 years ago

Is it normal that my BSC node is consuming over 1.4 TB of SSD space after just over 2 weeks of running?

litebarb commented 3 years ago

Hi guys, does anyone know how to: 1) check if peers are slow, and 2) manually find and add peers close to your region?

I would appreciate it if someone could help! I suspect my syncing issue is a problem with peers.
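
For (2), a sketch using geth's standard admin API (the enode URL is a placeholder); peers can also be pinned via StaticNodes in config.toml, as in the config shared later in this thread:

./build/bin/geth attach ./node/geth.ipc
> admin.addPeer("enode://<node-id>@<ip>:30311")   // dial and keep a specific peer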

stenleegunz commented 3 years ago

did anyone manage to sync?

stenleegunz commented 3 years ago


I use the recommended specifications, but full sync is not working. As far as I can see from the logs, when the node is initialized it switches from full to fast mode, but it does not switch back, as if it were working in archive mode. It reaches a block difference of up to 64, then it accumulates up to 128, then an instant jump back to a difference of 64 occurs, and so on in a circle. The hardware is what you recommended, and I follow the official guide, but it is the same every time. Enough feeding people talk of weak hardware and the like; give a clear answer or provide your personal configuration that works at the moment.

jayboy-mabushi commented 3 years ago

@stenleegunz use --syncmode snap that worked for me and for others as well

Crypto2 commented 3 years ago

I downloaded the snapshot, which took 4 days, so I was 5 days behind; from there it syncs in full mode, and it's still over 2 days behind now. So maybe 7-8 days in total to sync a node :(

litebarb commented 3 years ago

Hi everyone, just an update. I managed to sync up using the snapshot downloaded from here with the SyncMode = "full" configuration. It took about 45 hours in total. Initially I tried syncmode = "fast" but failed. I'm using geth 1.1.1-beta.

I would appreciate it if someone could explain why fast sync didn't work for me... the state download never stopped, even after 48 hours.

Anyway, here's the configuration that worked for me with syncmode = "full". Hopefully it helps you.

config.toml:

[Eth]
NetworkId = 56
SyncMode = "full"
NoPruning = false
NoPrefetch = false
LightPeers = 100
UltraLightFraction = 75
TrieTimeout = 100000000000
EnablePreimageRecording = false
EWASMInterpreter = ""
EVMInterpreter = ""

[Eth.Miner]
GasFloor = 30000000
GasCeil = 40000000
GasPrice = 1000000000
Recommit = 10000000000
Noverify = false

[Eth.TxPool]
Locals = []
NoLocals = true
Journal = "transactions.rlp"
Rejournal = 3600000000000
PriceLimit = 1000000000
PriceBump = 10
AccountSlots = 512
GlobalSlots = 10000
AccountQueue = 256
GlobalQueue = 5000
Lifetime = 10800000000000

[Eth.GPO]
Blocks = 20
Percentile = 60
OracleThreshold = 20

[Node]
IPCPath = "geth.ipc"
HTTPHost = "localhost"
NoUSB = true
InsecureUnlockAllowed = false
HTTPPort = 8545
HTTPVirtualHosts = ["localhost"]
HTTPModules = ["eth", "net", "web3", "txpool", "parlia"]
WSPort = 8546
WSModules = ["net", "web3", "eth"]

[Node.P2P]
MaxPeers = 1000
NoDiscovery = false
BootstrapNodes = ["enode://1cc4534b14cfe351ab740a1418ab944a234ca2f702915eadb7e558a02010cb7c5a8c295a3b56bcefa7701c07752acd5539cb13df2aab8ae2d98934d712611443@52.71.43.172:30311","enode://28b1d16562dac280dacaaf45d54516b85bc6c994252a9825c5cc4e080d3e53446d05f63ba495ea7d44d6c316b54cd92b245c5c328c37da24605c4a93a0d099c4@34.246.65.14:30311","enode://5a7b996048d1b0a07683a949662c87c09b55247ce774aeee10bb886892e586e3c604564393292e38ef43c023ee9981e1f8b335766ec4f0f256e57f8640b079d5@35.73.137.11:30311"]
StaticNodes = ["enode://f3cfd69f2808ef64838abd8786342c0b22fdd28268703c8d6812e26e109f9a7cb2b37bd49724ebb46c233289f22da82991c87345eb9a2dadeddb8f37eeb259ac@18.180.28.21:30311","enode://ae74385270d4afeb953561603fcedc4a0e755a241ffdea31c3f751dc8be5bf29c03bf46e3051d1c8d997c45479a92632020c9a84b96dcb63b2259ec09b4fde38@54.178.30.104:30311","enode://d1cabe083d5fc1da9b510889188f06dab891935294e4569df759fc2c4d684b3b4982051b84a9a078512202ad947f9240adc5b6abea5320fb9a736d2f6751c52e@54.238.28.14:30311","enode://f420209bac5324326c116d38d83edfa2256c4101a27cd3e7f9b8287dc8526900f4137e915df6806986b28bc79b1e66679b544a1c515a95ede86f4d809bd65dab@54.178.62.117:30311","enode://c0e8d1abd27c3c13ca879e16f34c12ffee936a7e5d7b7fb6f1af5cc75c6fad704e5667c7bbf7826fcb200d22b9bf86395271b0f76c21e63ad9a388ed548d4c90@54.65.247.12:30311","enode://f1b49b1cf536e36f9a56730f7a0ece899e5efb344eec2fdca3a335465bc4f619b98121f4a5032a1218fa8b69a5488d1ec48afe2abda073280beec296b104db31@13.114.199.41:30311","enode://4924583cfb262b6e333969c86eab8da009b3f7d165cc9ad326914f576c575741e71dc6e64a830e833c25e8c45b906364e58e70cdf043651fd583082ea7db5e3b@18.180.17.171:30311","enode://4d041250eb4f05ab55af184a01aed1a71d241a94a03a5b86f4e32659e1ab1e144be919890682d4afb5e7afd837146ce584d61a38837553d95a7de1f28ea4513a@54.178.99.222:30311","enode://b5772a14fdaeebf4c1924e73c923bdf11c35240a6da7b9e5ec0e6cbb95e78327690b90e8ab0ea5270debc8834454b98eca34cc2a19817f5972498648a6959a3a@54.170.158.102:30311","enode://f329176b187cec87b327f82e78b6ece3102a0f7c89b92a5312e1674062c6e89f785f55fb1b167e369d71c66b0548994c6035c6d85849eccb434d4d9e0c489cdd@34.253.94.130:30311","enode://cbfd1219940d4e312ad94108e7fa3bc34c4c22081d6f334a2e7b36bb28928b56879924cf0353ad85fa5b2f3d5033bbe8ad5371feae9c2088214184be301ed658@54.75.11.3:30311","enode://c64b0a0c619c03c220ea0d7cac754931f967665f9e148b92d2e46761ad9180f5eb5aaef48dfc230d8db8f8c16d2265a3d5407b06bedcd5f0f5a22c2f51c2e69f@54.216.208.163:30311","enode://352a361a9240d4d23bb6fab19cc6dc5a5fc6921abf19de65afe13f1802780aecd67c8c09d8c89043ff86947f171d98ab06906ef616d58e718067e02abea0dda9@79.125.105.65:30311","enode://bb683ef5d03db7d945d6f84b88e5b98920b70aecc22abed8c00d6db621f784e4280e5813d12694c7a091543064456ad9789980766f3f1feb38906cf7255c33d6@54.195.127.237:30311","enode://11dc6fea50630b68a9289055d6b0fb0e22fb5048a3f4e4efd741a7ab09dd79e78d383efc052089e516f0a0f3eacdd5d3ffbe5279b36ecc42ad7cd1f2767fdbdb@46.137.182.25:30311","enode://21530e423b42aed17d7eef67882ebb23357db4f8b10c94d4c71191f52955d97dc13eec03cfeff0fe3a1c89c955e81a6970c09689d21ecbec2142b26b7e759c45@54.216.119.18:30311","enode://d61a31410c365e7fcd50e24d56a77d2d9741d4a57b295cc5070189ad90d0ec749d113b4b0432c6d795eb36597efce88d12ca45e645ec51b3a2144e1c1c41b66a@34.204.129.242:30311","enode://bb91215b1d77c892897048dd58f709f02aacb5355aa8f50f00b67c879c3dffd7eef5b5a152ac46cdfb255295bec4d06701a8032456703c6b604a4686d388ea8f@75.101.197.198:30311","enode://786acbdf5a3cf91b99047a0fd8305e11e54d96ea3a72b1527050d3d6f8c9fc0278ff9ef56f3e56b3b70a283d97c309065506ea2fc3eb9b62477fd014a3ec1a96@107.23.90.162:30311","enode://4653bc7c235c3480968e5e81d91123bc67626f35c207ae4acab89347db675a627784c5982431300c02f547a7d33558718f7795e848d547a327abb111eac73636@54.144.170.236:30311","enode://c6ffd994c4ef130f90f8ee2fc08c1b0f02a6e9b12152092bf5a03dd7af9fd33597d4b2e2000a271cc0648d5e55242aeadd6d5061bb2e596372655ba0722cc704@54.147.151.108:30311","enode://99b07e9dc5f204263b87243146743399b2bd60c98f68d1239a3461d09087e6c417e40f1106fa606ccf54159feabdddb4e7f367559b349a6511e66e525de4906e@54.81.225.170:30311","enode://1479af5ea7bda822e8747d0b967309bced22cad5083b93bc6f4e1d7da7be067cd8495dc4c5a71579f2da8d9068f0c43ad6933d2b335a545b4ae49a846122b261@52.7.247.132:30311","enode://43562d35f274d9e93f5ccac484c7cb185eabc746dbc9f3a56c36dc5a9ef05a3282695de7694a71c0bf4600651f49395b2ee7a6aaef857db2ac896e0fcbe6b518@35.73.15.198:30311","enode://08867e57849456fc9b0b00771f53e87ca6f2dd618c23b34a35d0c851cd484a4b7137905c5b357795025b368e4f8fe4c841b752b0c28cc2dbbf41a03d048e0e24@35.74.39.234:30311"]
ListenAddr = ":30311"
EnableMsgEvents = false

[Node.HTTPTimeouts]
ReadTimeout = 30000000000
WriteTimeout = 30000000000
IdleTimeout = 120000000000

Geth command:

./geth --config ./config.toml --datadir ./node  --cache 18000 --rpc.allow-unprotected-txs --txlookuplimit 0 --gcmode=full --snapshot=false