I don't exactly know "how long" it would take, but it's suggested to let it run for as long as it takes. It took me 12-15 hours on my 8-core i7 machine with --cache
set to 10 GB of RAM.
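For reference, the kind of invocation being described looks roughly like this (a sketch, not the poster's exact command; --cache takes a value in MB, and on newer geth releases --syncmode "fast" replaces the legacy --fast flag):
geth --fast --cache 10240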
@asgs I have read the issues before, but they don't make any sense to me. I just want to know how long I have to wait, and whether I should stop it. Yesterday eth.syncing returned false, and a few minutes later it printed something like "{ currentBlock: 4685483, highestBlock: 4685675, knownStates: 51479249, pulledStates: 51462166, startingBlock: 4681817 }" again. Should I stop it and update geth and Mist? I see there is a new version, 1.7.3.
@idotial how long have you been waiting, and do you have an SSD/fast internet? No, you probably shouldn't stop it, but you can try... it took me about 6 hours for the state-import part yesterday. Also, how much RAM do you have? Maybe that's an issue.
I have the same problem. I am using Version: 1.7.3-stable on OSX; I ran geth --syncmode "light" in the terminal.
I've got 251 GB of flash storage and 8 GB of RAM.
Here is the eth.syncing output so far:
{
currentBlock: 4762485,
highestBlock: 4762629,
knownStates: 14708481,
pulledStates: 14675786,
startingBlock: 0
}
The problem is I don't have enough disk space: I only have 7 GB left, and I'm not sure whether to throw away my old chaindata and trust that it will be OK. If it doesn't work I will have to do it all again, it still might not work, and I won't have any chaindata. All this to get my DAO withdrawal contract to work.
How much space did you need to finish the sync, and did it work? Did it ever finish?
@xstalpha It's still running, but it has started printing Imported new chain segment, so I guess it's alright.
@keznerve just stop it. You can add a portable hard disk and rerun it.
@ugotjelly I stopped it and restarted it a few times, and now the log shows 'Imported new chain segment', so maybe it will finish in several days.
Hi, yes it finished... it went up to 38 GB by the 19th of Dec, and it got me to clear out all the crap on my Mac to save space. The Imported new chain segment phase lasted a few hours, and it all made sense in the end. It had more warnings towards the finish, but I didn't stop it. The whole thing took around 15 hrs. I haven't done it since. I did get my ETH from the withdraw DAO contract.
@keznerve congratulations! It seems you have a better network.
In my case, millions of Imported new state entries happen at the end of the fast sync. The import works fine for some time, then slows down to almost a stop while continuously dropping peers with Stalling state sync, dropping peer.
I have tried to serve light peers, but in that case the state import always fails after some time with the error Node data write error. So I disabled --lightserv and ended up with the issue described above.
I run a dedicated node in the cloud, with an SSD and fast internet. The cache is 2 GB (--cache 2048).
I was also seeing this with the current master (1.8.0). 1.7.3 pulls in the chain state properly, but it is too slow to catch up on non-SSD disks.
I think this PR is going to be a big win.
I just finished fast syncing with Geth 1.7.3 64-bit for Windows using an SSD and wanted to leave this data point for others. It took a little more than 2 days.
eth.syncing
{
currentBlock: 4999894,
highestBlock: 4999897,
knownStates: 80999148,
pulledStates: 80999148,
startingBlock: 4999894
}
Yep, finished syncing just short of block 5,000,000. That's about 81 million known states for 5 million blocks. The database size on disk is 66,323,816,699 bytes (roughly 66 GB). I kept getting crashes using the 1.7.2 that came with the wallet, even after moving to an SSD. Parity in warp mode was also crashing.
thanks much
81 million known states for 5 million blocks 😱 😨
Currently at currentBlock=5071336 and knownStates=66991785
At least I know there's some way to go yet...
Been at this for ~8 hours on an Amazon EC2 m5.4xlarge.
@hynese did it finish?
Haven't had a chance to check yet. I was up over 80 million states at midnight last night (15 hours ago) and was still at eth.blockNumber=0 then.
Yes, I'm in sync now.
Finally...
Several hours to go from 0 to ~85 million knownStates
Appreciate you guys letting us know how many ironically named "known states" there are :-D
@martindye it finished for me (took 15 hours). Unfortunately those metrics are hidden after the download, but I think the last number of states I saw was around 100M.
On a machine with a slow disk subsystem and the new trie cache (merged in v1.8.0), it took six days to perform a fast sync. Before the new cache was implemented, the node was not able to stay in sync. The state entries import stopped just before the 100M mark:
INFO [02-17|18:26:09] Imported new state entries count=221 elapsed=3.390ms processed=99338822 pending=0 retry=0 duplicate=33987 unexpected=139311
INFO [02-17|18:26:09] Imported new block receipts count=0 elapsed=343.488µs number=5081106 hash=bae2fd…909c39 size=0.00B ignored=1
INFO [02-17|18:26:09] Committed new head block number=5081106 hash=bae2fd…909c39
Still being updated... { currentBlock: 5204440, highestBlock: 5211269, knownStates: 100913700, pulledStates: 100893264, startingBlock: 0 }
I downloaded Geth 1.8.1 in Ethereum Wallet and got locked out... unfortunately I didn't have any luck after December running the fast client. I have had to ditch the node for now. Hope it all works out soon.
INFO [04-17|10:02:46] Imported new state entries count=0 elapsed=729.752ms processed=121292080 pending=2995 retry=16 duplicate=26888 unexpected=32477
INFO [04-17|10:02:46] Imported new state entries count=200 elapsed=17.6µs processed=121292280 pending=2995 retry=16 duplicate=26888 unexpected=32477
INFO [04-17|10:02:50] Imported new state entries count=0 elapsed=164.846ms processed=121292280 pending=3186 retry=92 duplicate=26981 unexpected=32477
INFO [04-17|10:02:51] Imported new block headers count=0 elapsed=86.325µs number=5457364 hash=3054b0…74d7ae ignored=1
WARN [04-17|10:03:00] Synchronisation failed, retrying err="block body download canceled (requested)"
INFO [04-17|10:03:34] Imported new block headers count=2 elapsed=7.211ms number=5457378 hash=fb48cd…4ba488 ignored=79
INFO [04-17|10:03:35] Imported new block headers count=2 elapsed=5.819ms number=5457380 hash=67b961…83eceb ignored=0
What is happening?
Approximately 130 million state entries, three weeks ago.
Just finished today; the last knownStates I saw was 130622327.
As of 8th May 2018, with geth 1.8.7-stable, you'll need 87 GB and will have downloaded 134526609 "knownStates", on a t2.medium instance on AWS with their default SSD. The fast sync completed in about 42 hours. Hope that helps some people!
Just wanted to say thanks to the last 3 responders for actually giving a number! Every other thread on this topic just tells people to run in light mode, use MEW, etc, etc. At least I know what I'm in for here (I'm chugging along at 129mm states at the moment, hoping the target is about 140mm now). I'll endeavor to post the most recent state number when complete. Would be nice if this number could be found somewhere (like etherscan).
The answer: As of May 18, it finally completed at 140,936,000
Just want to know: for an instance with, say, 8 GB of memory, what should the cache setting be? 4096? My current eth.syncing result is { currentBlock: 5637459, highestBlock: 5637534, knownStates: 51360598, pulledStates: 51336172, startingBlock: 4363788 }, already running for 20 hours. Yesterday the disk I/O was 180/170 writes/reads per second and the CPU went up to 60-70%; now it's almost as if it's doing nothing, everything is low.
So when it's pulling states (instead of blocks), it almost seems like the current geth (1.8.8 stable) is not trying very hard to complete the job...
Process: geth --syncmode fast --cache 2048 --rpc. Hardware: an EC2 t2.large instance, which has 8 GB of memory and 2 cores.
Update on 5/20/2018: I've restarted the geth process and it has improved; at least the disk I/O shows it's active now. eth.syncing returns { currentBlock: 5643662, highestBlock: 5643742, knownStates: 133445120, pulledStates: 133438316, startingBlock: 5643648 }. I think it's almost done now; the knownStates should be around 14xxxxxxx.
Update: sync achieved, finally. However, I can't really tell the pulledStates value because eth.syncing now returns false. Can anybody tell me how to get pulledStates after you reach synced status? The last value I saw was 14xxxxxxx, but now I don't know how to get the pulledStates count anymore.
Geth is leaking memory in my setup, so have a wrapper script ready to restart it upon failure. A t2 instance will run out of CPU credits unless you switch on T2 Unlimited. Cache 2048 is fine. EBS throughput is very important: you will also run out of EBS burst credits running geth on a gp2 volume.
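A minimal sketch of such a wrapper, assuming a plain shell loop with placeholder flags (adjust to your own setup):
#!/bin/sh
# keep restarting geth if it exits or crashes
while true; do
  geth --syncmode "fast" --cache 2048
  echo "geth exited, restarting in 5 seconds..." >&2
  sleep 5
done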
@shawhu As you have indicated it is doing almost nothing now, I guess you might be hitting some of the limits mentioned above.
05/28 Finished a --syncmode "fast" after 7 1/2 days using geth 1.8.8
currentBlock: 5691828, highestBlock: 5691837, knownStates: 153542316, pulledStates: 153542316,
93.4GB on Disk in my /roaming/ethereum folder
I made a quick script to run in your geth console to check how fast your update is happening:
https://gist.github.com/Beasta/695612bfc856450353cd6710dbfe22bb
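For those who don't want to open the gist, a minimal sketch of that kind of measurement in the geth JavaScript console could look like the following (this is not the gist's code; the helper names are made up, and it assumes eth.syncing is still reporting progress rather than returning false):
// take a snapshot of the current time and pulledStates
var _snap;
function markSync() {
  _snap = { t: Date.now(), pulled: eth.syncing.pulledStates };
}
// call later to get the average states/second since markSync()
function syncRate() {
  var secs = (Date.now() - _snap.t) / 1000;
  return (eth.syncing.pulledStates - _snap.pulled) / secs;
}
// usage: markSync(); wait a minute or two; then syncRate()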
I'm running a 24-core, 128 GB RAM DigitalOcean droplet and getting roughly 5000-6000 states/sec update speed (roughly 8 hours to sync if there are 150 million states). It rarely seems to break 50% RAM usage, so if I were to do it again I'd use a droplet with 64 GB of RAM. Once it's synced, I'll pass the chaindata folder to a smaller machine.
I did actually have it fill all 128 GB once and then crash... memory leak, maybe?
On a local machine with 4 cores, 8 GB RAM, and a hard disk (not an SSD) I'm getting 10 states/sec update speed :/
Just to give some information: I started the sync on May 18, and it ended just minutes ago.
blockNumber: 5740903, pulledStates: 191247645
@Beasta Thanks! Curious:
1/ Do you do the switch to the smaller machine manually? 2/ What happens when you run out of space? Do you restart the process again?
@erickhun I add an external volume that's 250 GB. Once it's synced on the big machine, I detach the volume and move it to the small machine. I just use the --datadir flag to point geth at the external volume (see the example below). I have yet to overfill the 250 GB drive, but when it happens I'll probably just move it to a larger volume.
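A sketch of what pointing geth at an external volume can look like (the mount point below is just a placeholder for the example):
geth --syncmode "fast" --cache 2048 --datadir /mnt/volume-eth/geth-data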
Using the script I posted above to calculate the states/second update speed, my single-core, 1 GB RAM DigitalOcean droplet is doing roughly 400 states/sec.
The upper bound is 196408701. Ended today, July 5 2018. The chaindata folder size is 115 GB.
Just to give some information: the upper bound is 181012156. Ended today, July 18 2018. The chaindata folder size is 108 GB.
Maybe it's because I synced with more than one version, stopped for a few months, and restarted these days. Maybe that is why I have a different number of state entries and a different chaindata folder size.
I have been running --fast --cache 12288 on a dedicated datacenter server for 8 days.
Currently: block height 6191990, state entries 204,411,181.
I hope it will be synced soon.
@nklak any luck? Running 1 week, up to block 6203435, states 203,927,275.
No, currently at 209,815,176 state entries... it is slowing down, so I guess we are close. Has anybody synced recently? Can you give us a number...
...2 days later... block 6216181, knownStates 209902865... still crunching.
I am at 217,237,711... no luck so far...
insane.
Insane is when you try to run geth with the --syncmode "full" flag. Around block 2.6 million, saving the data becomes slower than new blocks are being created.
The mythical "fully synced Ethereum node": 215,817,528 knownStates and counting...
I am at 225,538,765 and it is obvious that it is having problems keeping up. I am running the node on a virtual machine with 2 CPUs and 24 GB RAM, on 2x6 TB enterprise SATA 7.2k HDDs in RAID1.
In the meantime I created a cloud instance with 4 vCPUs, 32 GB RAM, and a 240 GB NVMe SSD, and after running for 6 hours I am already at 100,000,000 entries; it is clearly much, much faster. If the node on the HDD cannot keep up, I will move it to SSD, but I am limited in size. If a fast-sync node takes about 110-120 GB now, how will it grow after the sync? I know that a full-sync node is over 800 GB now. Can anybody advise?
@nklak how long have you been running geth to get to 225 million? Can you specify your command-line params as well?
On the server with the HDD I have been running geth for 18 days now... currently at 226,000,000. It now takes about one day per 4-5 million entries.
On the server with the SSD I have been running for 12 hours and I am already at 156,000,000 entries... it is obvious that an SSD is much, much faster. When I hit the max number of state entries on the SSD, I will post the number and the total time needed.
System information
Geth version: 1.7.2
OS & Version: OSX
Expected behaviour
geth --fast should finish soon.
Actual behaviour
It has been running for 3 days and constantly prints "Imported new state entries count=384 elapsed=26.970ms processed=50023987 pending=33074 retry=0 duplicate=19087 unexpected=47765".
Steps to reproduce the behaviour
Run geth --fast in the console.
Backtrace
INFO [12-06|07:08:00] Imported new state entries count=1259 elapsed=12.971ms processed=50014526 pending=34891 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:08:23] Imported new state entries count=774 elapsed=8.950ms processed=50015300 pending=34311 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:08:31] Imported new state entries count=1125 elapsed=9.513ms processed=50016425 pending=33428 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:08:39] Imported new state entries count=1061 elapsed=11.198ms processed=50017486 pending=32566 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:08:49] Imported new state entries count=1314 elapsed=12.041ms processed=50018800 pending=31248 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:09:10] Imported new state entries count=1028 elapsed=10.446ms processed=50019828 pending=30496 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:09:25] Imported new state entries count=1241 elapsed=10.423ms processed=50021069 pending=30088 retry=0 duplicate=19087 unexpected=47765
INFO [12-06|07:09:37] Imported new state entries count=777 elapsed=6.224ms processed=50021846 pending=29851 retry=26 duplicate=19087 unexpected=47765