Open quintana115 opened 7 months ago
I have the same issue here. Stuck at 1. Have you figured it out yet?
It seems to have happened with one of the latest Windows updates.
Same issue here, and even after I reinstalled Windows 11 and used version 0.45, it was still stuck at 1. In addition, I have a computer node which is known to be working properly. I entered the IP for that computer and was still unable to sync.
same issue on Mac and Windows... docker versions 4.28.0 and 4.20.1. Clean install on Windows 10 pro.
I am having the same exact issue with a Win 10 Pro fresh install, the latest Docker 4.29, and the latest Node app 4.10. Going to keep searching to find a solution and will report back to you guys when I find it, and I will find it. It has been too long a road already to ever give up now... lol
same issue on Mac and Windows... docker versions 4.28.0 and 4.20.1. Clean install on Windows 10 pro.
ok... the container is running just fine on its own. Kill the app and don't turn off the switch and all is good... it will sync and stay synced. Debug shows that something triggers a purge of all temp files periodically, so the chain will never sync.
@aviscoreTH it worked!!! Thank you so much, now showing synced on Win10 Pro with the newest Pi Node app 4.10 and newest Docker 4.29
@aviscoreTH / @Chippleadder... Kill which app? The docker or the Pi app? No matter what I do, it does not sync.
And no, I haven't seen your red stapler @aviscoreTH .
Experiencing same issue. Windows 11 build 22631. Pi Node v0.4.10. Unsure what app to kill and what switch not to turn off, please clarify @aviscoreTH
Also, I tried the fix mentioned here (yes, I know it's from 3 years ago). Can anyone confirm whether setting a static IP there is a bad idea? No one addressed this in the OP and everyone said it Just Worked.
I meant this switch: leave it on and just close the app. The container will continue and sync itself. At the moment there's an issue where the node bonus site (node.minepi.com) is unreachable; possibly related to this issue. Docker 4.29.0 and Node 0.4.10 seem more stable as well.
I too have had this issue for a while now. I've tried starting up the container only, but that didn't help either. I have had the command prompt open (as administrator) and have been regularly running the following line for info on the container:
docker exec -it pi-consensus bash -c "stellar-core http-command info"
The one common thing I am noticing is the following line near the end:
"Catching up to ledger 16002111: downloading and verifying buckets: 19/21 (90%)"
The ledger number constantly changes, but it seems to be stuck at 19/21 buckets for me. I would be interested to know if anyone else can check and see if they have the same. It seems there is a problem downloading and verifying the buckets. I have used different versions of Docker Desktop, including the latest and 4.19, and different versions of the Pi app; the current one is 0.4.10. All with the same results.
All ports 31400 to 31409 are open and I have tested these externally too. I use a VPN with port forwarding and a fixed IP.
I am eager to find the solution to this. Perhaps my input can shed some light on the issue.
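For anyone else checking their own container, the status line can be pulled out of the `info` output without scrolling the whole JSON blob. A minimal sketch: the `sample` variable below is a stand-in for the real output of the `docker exec` command above (only the status string matters), and `pi-consensus` is the container name used in this thread.

```shell
# Stand-in for the JSON printed by:
#   docker exec -it pi-consensus bash -c "stellar-core http-command info"
sample='{
  "info": {
    "ledger": { "num": 1 },
    "state": "Catching up to ledger 16002111: downloading and verifying buckets: 19/21 (90%)"
  }
}'

# grep -o prints only the matching status text; pipe the live
# docker exec output into the same grep to watch catchup progress.
echo "$sample" | grep -o 'Catching up to ledger [^"]*'
```

Against a live container you would replace the `echo` with the `docker exec` command itself.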
I have the exact same output- sitting at 19/21 buckets. I've been running the Node app for a few days now and my Local block number is still 1, but the Latest block number has increased so I know I'm getting updates on the blockchain. And I see a steady 7-8 outbound connections and 0-1 inbound connections on the troubleshooting page, so I know it's not a networking issue.
Yes, that's where it used to stop, until I closed the app to get the chain in sync. I recommend updating Docker to 4.29.0 and the node to 0.4.10 and then getting it in sync, so you avoid the issue of the chain not catching up.
On the bottom of the Pi Node app, under the block number, I have a whole new thing saying "Unable to reach the network, provide IP/Domain of known node" and an alt URL. This is getting way over my head. Any suggestions?
I have the exact same output- sitting at 19/21 buckets. I've been running the Node app for a few days now and my Local block number is still 1, but the Latest block number has increased so I know I'm getting updates on the blockchain. And I see a steady 7-8 outbound connections and 0-1 inbound connections on the troubleshooting page, so I know it's not a networking issue.
Yes the current block number is increasing for me. It's the local block number that remains at 1.
Yes, that's where it used to stop, until I closed the app to get the chain in sync. I recommend updating Docker to 4.29.0 and the node to 0.4.10 and then getting it in sync, so you avoid the issue of the chain not catching up.
I am currently running those versions with the issue still present. I have removed blockchain data and rebooted. I've even tried running the container without the app (although you are apparently supposed to run both for your node bonus to be calculated correctly, so I've read). I've tried it through a VPN with port forwarding and a fixed IP, and also without the VPN. (I used a fixed IP on my VPN because most service providers share IP addresses among their customers, which can result in problems calculating the node reward. It often goes to zero. A fixed IP addresses that issue.)
On the bottom of the Pi Node app, under the block number, I have a whole new thing saying "Unable to reach the network, provide IP/Domain of known node" and an alt URL. This is getting way over my head. Any suggestions?
That's where my issue started. To get rid of that I removed blockchain data, rebooted. Since then I've had the issue in this thread.
"Catching up to ledger 16002111: downloading and verifying buckets: 19/21 (90%)"
My node is also stuck at this status. However, it is currently at ledger 16009919.
Found this in my container's stellar-core-stdout---supervisor-eM_RyD.log file. Does it mean anything to anyone?
2024-04-20T10:33:35.258 GDSJI [Process WARNING] process 61286 exited 22: curl -sf https://history.testnet.minepi.com/bucket/9f/a7/8e/bucket-9fa78e2a70fa92e1193f51ceed15b82fff422d95777c0cebeb021a204bb4a5ff.xdr.gz -o buckets/tmp/catchup-6ca6b86f1b852f4b/bucket/9f/a7/8e/bucket-9fa78e2a70fa92e1193f51ceed15b82fff422d95777c0cebeb021a204bb4a5ff.xdr.gz.tmp
2024-04-20T10:33:35.258 GDSJI [History ERROR] Could not download file: archive cache maybe missing file bucket/9f/a7/8e/bucket-9fa78e2a70fa92e1193f51ceed15b82fff422d95777c0cebeb021a204bb4a5ff.xdr.gz
Also getting a ton of these messages in the same file:
2024-04-20T10:40:06.756 GDSJI [Overlay INFO] Non preferred outbound authenticated peer 82.77.208.75:31402 rejected because all available slots are taken.
2024-04-20T10:40:06.756 GDSJI [Overlay INFO] If you wish to allow for more outbound connections, please update your configuration file
2024-04-20T10:40:06.756 GDSJI [Overlay INFO] Dropping peer 82.77.208.75:31402, reason peer rejected
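For what it's worth, curl exits with code 22 exactly when `-f/--fail` is set and the server answers with an HTTP error (4xx/5xx), which matches the `exited 22` in the warning above. If the failures are transient (timeouts under load rather than a genuinely missing file), a more forgiving fetch command might help. A sketch only, assuming the `[HISTORY.cache]` section and `{0}`/`{1}` placeholder convention used by stellar-core's history `get` command, and not a confirmed fix:

```ini
[HISTORY.cache]
# Assumption, not a confirmed fix: --retry re-attempts transient failures
# with a delay, and -C - resumes a partial download instead of starting
# the whole bucket over. curl exits 22 only because -f is set and the
# server returned an HTTP error >= 400.
get="curl -sf --retry 5 --retry-delay 10 -C - https://history.testnet.minepi.com/{0} -o {1}"
```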
Normal… happens all the time.
still not working, not getting credit for mining even though Docker is running great
Hello everyone, just signed in to say I've had the same issue for the last few weeks. I've tried everything, but the local block number is still 1 and I'm stuck at "Catching up to ledger 16053247: downloading and verifying buckets: 19/21 (90%)".
I even upgraded to Windows 11 Pro and Docker Pro lol :D
I've kind of given up because the core team doesn't respond at all, and I also get muted by the mods often because I ask questions lol.
Anyway, when Pi is listing and what it will be worth, that's the question. PC energy is also a cost, especially for nodes.
Are we in a dream, or will it be worth it? We all need some miracle :)
Thank you to the guys who can help fix node sync...
Up.
Have you found a solution?
I did much more than you and it's still the same shit :D:D:D
@aviscoreTH it worked!!! Thank you so much, now showing synced on Win10 Pro with the newest Pi Node app 4.10 and newest Docker 4.29
Hey, what is the solution? Steps and versions please. Still OK btw?
same issue on Mac and Windows... docker versions 4.28.0 and 4.20.1. Clean install on Windows 10 pro.
ok... the container is running just fine on its own. Kill the app and don't turn off the switch and all is good... will sync and stay synched. Debug shows that something triggers the purge of all temp files periodically and the chain will never sync.
What do you mean, kill the app? Which app, and how do I kill it, bro? Please be clearer.
After your post I started monitoring the logs you mentioned. Namely those at:
c:\users\[USERNAME]\AppData\Roaming\Pi Network\docker_volumes\supervisor_logs\
with names starting stellar-core-stdout---supervisor
What I've noticed is that the logging volume is quite high, creating a 51 MB log file roughly every 12 hours. In each of these files there are about 450 attempts to download the 19th bucket that result in the lines you highlighted.
2024-04-20T10:33:35.258 GDSJI [Process WARNING] process 61286 exited 22: curl -sf https://history.testnet.minepi.com/bucket/9f/a7/8e/bucket-9fa78e2a70fa92e1193f51ceed15b82fff422d95777c0cebeb021a204bb4a5ff.xdr.gz -o buckets/tmp/catchup-6ca6b86f1b852f4b/bucket/9f/a7/8e/bucket-9fa78e2a70fa92e1193f51ceed15b82fff422d95777c0cebeb021a204bb4a5ff.xdr.gz.tmp 2024-04-20T10:33:35.258 GDSJI [History ERROR] Could not download file: archive cache maybe missing file bucket/9f/a7/8e/bucket-9fa78e2a70fa92e1193f51ceed15b82fff422d95777c0cebeb021a204bb4a5ff.xdr.gz
albeit with different process numbers obviously.
So we could leave the nodes on and hope that they fix the issue with bucket 19, or find some way to reach out to the devs to address it. I've tried to find ways to do that without success. You would assume that they might monitor the GitHub repo, but who knows. They are a mysterious bunch, which is quite a worry really.
The downside of leaving it and waiting is, firstly, no node rewards. So we may ask ourselves: if we are getting no reward for running a node and no help to remedy the situation, then why bother wasting a good machine and electricity on running it when it could be repurposed for something worthwhile? (I personally have a decent dedicated node.)
Secondly, if we have faith and leave it in the hope that they will fix it, and our nodes start to catch up, then those who are challenged for disk space might find theirs running out quicker than they wished. At the current rate, the block-1 nodes would accumulate logs scattered with bucket errors at 3 GB per month!
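For what it's worth, the 3 GB/month figure checks out against the sizes quoted above. A quick sanity check, using only the numbers reported in this comment:

```python
# Sanity check of the log-growth claim, using only the sizes reported above.
log_mb_per_file = 51   # each supervisor log file is ~51 MB
files_per_day = 2      # one file per ~12 hours

gb_per_month = log_mb_per_file * files_per_day * 30 / 1_000  # MB -> GB over 30 days
print(round(gb_per_month, 2))  # ~3.06, i.e. roughly the 3 GB/month quoted
```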
Time to step up, Pi devs. I fear there is no way we as simple node operators can make this work. What on earth is the Pi ecosystem going to do when they want to launch mainnet and move node operators over to mainnet nodes, when their testnet ones are flawed? It could be catastrophic.
We've had suggestions from other node operators and we've tried them all. All you node operators that claim to be running smoothly with recent local block numbers I'm glad for you, but I feel for all those that have had this issue for a while now without sight of resolution.
Here's my take on the situation. After looking at the supervisor logs, I noticed that there are many failures in downloading most of the buckets. This tells me the web server(s) are very busy and have reached max connections; that's why it works sometimes and not others with the lower buckets. The way the history buckets are formed, as I understand it, is that each one is larger by orders of magnitude than the previous one, which makes each subsequent bucket take longer to download. Now, if a web server is inundated with requests, things slow down, processes hit web server and proxy timeouts, and that in turn leads to failures downloading large files. This seems to map out in the log, because the errors seem to increase as the buckets increase. The common total failure point for nearly all of us is bucket 19, which would mean buckets 20 and 21 are much larger still. So minor adjustments to timeouts will just not do; there needs to be some major adjustment or upgrade, or the addition of multiple servers.
This issue is probably further compounded by the fact that more and more nodes are being created on a daily basis, all of them requesting the buckets from the 3 testnet nodes, as I understand it. To add to this, all existing nodes that have cleared their data are repeatedly requesting bucket 19 over 900 times a day each, probably downloading a section of it and hitting a timeout due to the weight of traffic. Critical mass comes to mind.
I think this is an issue that only the Devs can address. How might they do this?
Perhaps by adding some extra testnet nodes and adjusting web server (and proxy) timeouts. Maybe by using different download techniques that create persistent connections which retry until the package has been delivered?
What is evident to me is that if we are stuck on 19 then there are much bigger capacity issues because we haven't even got to 20 and 21 and they are much larger. Not to mention that as the chain grows, then so will the buckets.
I have been a full stack developer for a number of years and have come across issues like this, where traffic cascades due to failures and creates a bigger problem overall. So finding a way to stop downloads failing will go a long way towards alleviating the problem, because it will allow all current and new nodes to actually catch up and therefore stop taxing the queue.
I think you are damn near on target with why we’re having all the problems, and it truly makes sense. The servers are probably overloaded and they need to grow their server base to support all the new traffic.
Pi node not working: bucket 19/21 and local block number 1 is a fucking nightmare, and forever.
Catching up to ledger 16069823: downloading and verifying buckets: 19/21 (90%)
I tried every node and Docker version with a full clean install over the last 3 weeks. Core, JSON, the time trick, bla bla bla, nothing works...
Node bonus is 0, so are we wasting our PC, energy, time, and, and...
And when can we sell? Ohh, listing is still a mystery after 5 years :(
They mute you fast when you talk about this kind of thing in chat or ask for help about node issues. The core team doesn't respond to any mails either.
So are we alone?
A rough calculation based on conservative estimates of the data strain could be as follows. There are reportedly well over 100,000 nodes, so let's say 1 percent of them have this issue, and round it down to 1,000 nodes. Looking at the download stats in Docker, it takes about 220 MB to get to bucket 18, so let's make a low assumption for bucket 19 of 20 MB (it's probably a lot larger). 1,000 nodes x 20 MB x 900 attempts would come to a maximum daily request of 18 terabytes. Of course virtually all attempts will fail before that figure is reached, so let's halve it to 9 TB. That's still an awful lot of data for 3 servers in one day, and as I stated, this is a very conservative estimate.
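The arithmetic above, written out. All inputs are the comment's conservative assumptions, not measured values:

```python
# The comment's traffic estimate, written out. All inputs are assumptions.
nodes = 1_000            # ~1% of the reported node count, rounded down
bucket_mb = 20           # low-ball assumed size of bucket 19
attempts_per_day = 900   # ~450 attempts per 12-hour log file, doubled

total_tb = nodes * bucket_mb * attempts_per_day / 1_000_000  # MB -> TB
print(total_tb)      # 18.0 TB/day if every attempt ran to completion
print(total_tb / 2)  # 9.0 TB/day if attempts fail about halfway through
```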
One possible solution I thought they could employ to help clear the backlog without any major upgrades would be for them to create some kind of queue system. Maybe based on the current block number.
At the moment we "block 1'ers", as I'll call us, request the bucket every 2 or 3 minutes. Implement a system based on the size of the queue and the size of the bucket. For example, if there are 10 current requests for a bucket and the bucket is 50 MB, use those two figures to calculate a future block number: say 0.3 for each position in the queue and 1 for each MB in size. In this instance it would equate to 150 blocks ((0.3 x 10) x (50 x 1)). Add that number to the current block to get a future block, then instruct the node not to request the bucket again until that future block number is reached. In this example that is 150 blocks into the future, so instead of requesting the bucket in 2 minutes, the node would wait about 20 minutes, reducing its requests in a 20-minute window from 10 down to 1 and easing the data strain. Of course these numbers would probably need to be adjusted, but it's just an example of how to greatly reduce requests with the current setup by spreading them out over time, whilst still serving all nodes successfully. If a download fails again, that node's position in the queue will most likely be higher and its next request will be pushed further out.
Obviously this will make catching up slower if a large number of nodes reset, or if there is a major node update, but it will ensure that all nodes eventually catch up without bringing the system to a standstill. Even if it takes a little longer, I'm sure we would all be quite happy to wait a day more if it meant it would work.
Also using that calculation above would mean the queue would adapt to incoming requests. If a build up occurs then it's just a simple matter of adjusting the variables to extend the wait time.
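The proposed backoff can be sketched as follows. The weights are just the 0.3-per-queue-position and 1-per-MB example values from the comment above, not anything the Pi testnet servers actually implement:

```python
# Sketch of the proposed request backoff. The weights are the example
# values from the comment (0.3 per queue position, 1 per MB of bucket
# size), not anything the servers are known to do.
QUEUE_WEIGHT = 0.3
SIZE_WEIGHT = 1.0

def retry_after_blocks(queue_len: int, bucket_mb: float) -> int:
    """How many blocks into the future a node should wait before
    re-requesting a bucket, given queue depth and bucket size."""
    return round((QUEUE_WEIGHT * queue_len) * (SIZE_WEIGHT * bucket_mb))

# Worked example from the comment: 10 queued requests, 50 MB bucket.
print(retry_after_blocks(10, 50))  # -> 150 blocks (~20 minutes instead of ~2)
```

Because the delay scales with both queue depth and bucket size, a backlog automatically spreads retries further apart instead of hammering the archive.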
I hope you are watching PI Devs.
I have had exactly the same problem and tried everything. I think it is a server problem; unless someone can solve it, all we can do is leave it running and hope they fix the server problem with "downloading and verifying buckets: 19/21 (90%)"?
Strange- I just checked my Pi app and it says my node bonus is non-zero. So I must still be getting credit for running the node despite it hanging at 19/21 buckets.
@aviscoreTH it worked!!! Thank you so much, now showing synced on Win10 Pro with the newest Pi Node app 4.10 and newest Docker 4.29
Hey, what is the solution? Steps and versions please. Still OK btw?
For me it was very straightforward, so it may not be the same issue you are experiencing with the 19/21 90% message. On my node I run a VPS server with a static IP and a VPN connection to it, which gets me around the port forwarding restrictions of my ISP. Once I had that set up correctly, my blockchain would not sync but was stuck at 1. After consulting the node chat in the Pi developer section, I updated first to the newest Docker 4.29 and then the Pi Node app 4.10, neither of which fixed my problem of being stuck at block 1.

So after coming back here and seeing the suggestion, I did as follows: in the Node app there is a switch for turning on the blockchain, which turns pink when it is on. I left this switch on (so, pink), then went to my taskbar, found the Pi app icon, right-clicked it, and closed it. My Docker container was open and I left it open, with the Pi Node app closed but the switch on, overnight. When I returned in the morning and opened the Pi Node app via the desktop icon, my blockchain was fully in sync, and yes, it remains in sync.

Since then they came out with the 4.11 node app, so I followed the instructions to update my node. Once again my blockchain would not catch up: since I had stopped the 4.10 version to install the new one, it had fallen a few blocks behind. As I watched, the chain kept adding blocks but mine was stuck on the number I had stopped it at. Knowing the fix, I did the same as before, only closing the node app for twenty minutes since it was not too far behind the current block. When I reopened the Pi Node app it was in sync again, and it remains in sync now.

I hope at least this clears up what was done and hopefully it can help someone. There are some much older issues here that deal with catching up and syncing the blockchain; perhaps one of them could help? I also saw one from the Pi Network subreddit that I copied and pasted here in case it helps. Idk, good luck guys:
How to fix node problem - can confirm it works
find the config stellar-core.cfg file at
C:\Users\{your user name folder}\AppData\Roaming\Pi Network\docker_volumes\stellar\core\etc
open this file and edit this content at the end of the file, then restart your computer and start the node; it will catch up to the new block.
[HISTORY.cache]
get="curl -sf 161.35.227.224:31403/{0} -o {1}"
That's a pretty comprehensive reply and, unfortunately, all of the suggestions you have pointed out have already been tried without any success. I note that you said you have a VPS. Is this VPS off site, and do you know the download/upload speeds? If they are quite high (as they would normally be from a service that provides VPS setups), then that would fit with my idea that those with lower speeds are timing out.

I am currently running mine at home. The machine spec is pretty good: quad core (3.2 GHz per processor), 2 TB NVMe (7 GB/s), 32 GB of high-speed DDR4 RAM, dedicated graphics card. The machine is dedicated to the node only, so it's overkill, but I hope to migrate it to an open mainnet node in the future when that happens. My internal network is 1 Gb. My internet speeds are roughly 38 Mbps down and about 7 Mbps up. This is where I think the issue may be for most of us with a home node: if the download speed is not sufficient, then timeouts are likely to occur and the process for that bucket will start over, putting undue stress on the testnet nodes. That is only my guess, but it seems to be the only thing that can explain why some work and others don't. So yes, please tell me the up and download speeds your VPS can achieve.

The suggestion about the different IP address in the config file led to my startup not even getting to download any of the history buckets. I tried what you said about closing the process from the taskbar rather than clicking the 'X' in the top right corner. Still no success for me. All good suggestions are welcome, I think, because the dev team don't seem to be coming up with anything.
What I do find interesting is that since we've been talking about buckets, they have magically appeared on the node status page in the Pi app, above the block number, in version 0.4.11. I would urge everyone to keep up with the latest version. I only found out about this version through the previous comment, because the Pi site still states 0.4.10, but when you go to download it, it is 0.4.11. If we all update to it, we can see the bucket progress without having to look in the log files.
My speed test: 67.9 Mbps download, 18.4 Mbps upload, latency 24 ms (server: Dublin).

I have exactly the same problem as you. I don't think it's the internet speed; that would only slow it down. I have also tried everything. I'm just going to leave mine up with the latest node version and Docker and see if it's just going to take time, unless someone can finally solve the issue?
Just trying to get as much info into the thread as possible in the hope that the Pi devs are monitoring it. I do think the issue may well have something to do with data overload. Although we have over 100,000 reported active nodes, the history buckets only seem to be retrieved from the 3 testnet nodes, and the higher the bucket number, the larger the file.

So imagine this. The buckets are requested over HTTP, so it's safe to assume the testnet servers are running web server software among other things. Let's assume they are using the most popular web server, Apache 2, which by default has a limit of around 500 connections. Now let's assume that 1% of Pi nodes are facing this issue; that would equate to 1,000 nodes. Already we could have a problem here, because the three testnets would have a combined 1,500-connection limit, and catch-up traffic is only a proportion of testnet traffic, so it's inevitable that some connections will be immediately refused.

Let's assume that bucket 19 is 100 MB in size. Multiply that by 1,000 hungry nodes and you end up with a near-simultaneous request for 100 GB. This is a big ask and is going to result in requests timing out; timeouts are set on web servers precisely to protect them from crashing under the weight of high traffic. The big downside is that the vast majority of those hungry nodes will time out having downloaded 50, 60, 70 or more MB of the 100 MB, and it counts for nothing, because it's the whole bucket or nothing. So the testnet servers are buckling under the weight of unnecessary traffic, because 1,000 nodes are unsuccessfully downloading a portion of the bucket for nothing, then doing it again and again, stuck in a loop. This is why I suggested creating a queuing system, so those hungry nodes could form an orderly queue for the bandwidth.
It might make the catch up time longer but it would ensure that it would happen by spreading out traffic requests to the testnet devices.
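The back-of-envelope numbers above can be sketched out explicitly. All figures below are the illustrative assumptions from the comment (1% affected, 100 MB bucket, 500 connections per server), not measured values:

```python
# Back-of-envelope load estimate using the figures assumed above.
# Every number here is an illustrative assumption, not a measurement.
active_nodes = 100_000            # reported active nodes
affected_fraction = 0.01          # assume 1% of nodes are stuck catching up
bucket_size_mb = 100              # assumed size of bucket 19
servers = 3                       # the three testnet history servers
conn_limit_per_server = 500       # assumed default web-server connection limit

hungry_nodes = int(active_nodes * affected_fraction)
total_demand_gb = hungry_nodes * bucket_size_mb / 1000
total_conn_limit = servers * conn_limit_per_server

print(f"{hungry_nodes} nodes requesting ~{total_demand_gb:.0f} GB "
      f"against a combined limit of {total_conn_limit} connections")
```

With those assumptions the arithmetic works out to 1,000 nodes asking for roughly 100 GB at once, against a combined limit of only 1,500 connections, which is the overload scenario described above.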
https://pi-blockchain.net/dashboard/testnet this says there are 290,587 nodes and 227,450 active nodes ?
Not sure who owns that domain, but the ledger number doesn't seem to be changing and is out of date by almost 2 million blocks. However, if that data was correct when the ledger was at that block number, then you can see that my numbers are grossly underestimated and the 1% would easily exceed the connection limit of 3 servers.
Some kind of traffic-calming system needs to be implemented, because there may be a future occasion where an emergency node update is required, resulting in a majority of nodes updating at the same time. If nothing is done about this, chaos could ensue. It wouldn't look very good for Pi if that were to happen during open mainnet.
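One well-known way to approximate the "orderly queue" suggested above, without any server-side changes, is client-side exponential backoff with jitter, so that retries from many stuck nodes spread out over time instead of arriving simultaneously. A minimal sketch of the idea (this is an illustration only, not what the Pi node software actually does):

```python
import random

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 300.0,
                   seed: int = 0) -> list[float]:
    """Return randomized retry delays: "full jitter" exponential backoff.

    The ceiling doubles each attempt (capped at `cap` seconds) and the
    actual delay is drawn uniformly below the ceiling, so a crowd of
    clients naturally de-synchronizes instead of retrying in lockstep.
    """
    rng = random.Random(seed)  # seeded here so results are reproducible
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * 2 ** attempt)
        delays.append(rng.uniform(0, ceiling))
    return delays
```

Each client picking its own random seed would see a different schedule, which is exactly what spreads the load across the history servers.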
What I do find interesting is that, since we've been talking about buckets, they have magically appeared on the Node status page in the Pi app, above the block number, in version 0.4.11. I would urge everyone to keep up with the latest version. I only found out about this version through the previous comment, because the Pi site still states 0.4.10, but when you go to download it, it is 0.4.11.
I did a complete destroy and reinstall and noticed this block count and the version discrepancy too. Thought it was just something I didn't notice before. Good (and indeed quite interesting) to hear it's not just me.
By all accounts this entire repo seems to be dead since it's technically only for the Pi Node Instructions themselves, not the Pi Node code. I opened a support ticket with them since that seems to be a more "official" way of addressing non-documentation issues, and there doesn't seem to be a repo for raising code issues like this one.
Which makes me highly skeptical this thread or this repo is being monitored at all. But then again, those 2 little changes snuck in there at the same time we're having these discussions, so who knows...
Appreciate you smart folks taking the time on this at least! My expertise is only in networking and containers, not blockchain servers.
@aviscoreTH it worked!!! Thank you so much, now showing synced on Win 10 Pro with the newest Pi Node app 4.10 and newest Docker 4.29
Hey, what is the solution? Steps and versions please. Still OK, btw?
For me it was very straightforward, so it may not be the same issue you are experiencing with the 19/21 90% message. My node runs behind a VPS with a static IP and a VPN connection to it, which gets me around my ISP's port-forwarding restrictions. Once I had that set up correctly, my blockchain would not sync but was stuck at 1. After consulting the node chat in the Pi developer section, I updated first to the newest Docker (4.29) and then to the Pi Node app (4.10); neither fixed my problem of being stuck at block 1.

So after coming back here and seeing the suggestion, I did the following: in the node app there is a switch for turning on the blockchain, which turns pink when on. I left this switch on (pink), then went to my taskbar, found the Pi app icon, right-clicked it and closed it. My Docker container was open and I left it open, with the Pi Node app closed but the switch on, overnight. When I returned in the morning and opened the Pi Node app via the desktop icon, my blockchain was fully in sync, and yes, it remains in sync.

Since then they came out with the 4.11 node app, so I followed the instructions to update my node. Once again my blockchain would not catch up: since I had stopped the 4.10 version to install the new one, it had fallen a few blocks behind. As I watched, the chain kept adding blocks but mine was stuck at the number where I had stopped it. Knowing the fix, I did the same as before, only closing the node app for twenty minutes since it was not far behind the current block. When I reopened the Pi Node app it was in sync again and remains in sync now.

I hope this at least clears up what was done and can help someone. There are some much older issues here that deal with catching up and syncing the blockchain; perhaps one of them could help? I also saw one from the Pi Network subreddit that I copied and pasted here in case it helps. Good luck, guys:
How to fix node problem - can confirm it works
find the config stellar-core.cfg file at
C:\Users{your user name folder}\AppData\Roaming\Pi Network\docker_volumes\stellar\core\etc
Open this file and edit this content at the end of the file, then restart your computer and start the node; it will catch up to the new blocks.
[HISTORY.cache]
get="curl -sf https://history.testnet.minepi.com/{0} -o {1}"
get="curl -sf 161.35.227.224:31403/{0} -o {1}"
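For anyone wondering what the `{0}` and `{1}` placeholders in those `get=` lines mean: stellar-core substitutes the remote object path for `{0}` and the local destination file for `{1}` before running the command. A minimal sketch of that expansion (the bucket file name below is a made-up example, not a real object path):

```python
def expand_get(template: str, remote_path: str, local_file: str) -> str:
    """Expand a stellar-core HISTORY 'get' command template.

    {0} is replaced with the remote object path and {1} with the
    local destination file, producing the shell command the node runs.
    """
    return template.replace("{0}", remote_path).replace("{1}", local_file)

template = "curl -sf https://history.testnet.minepi.com/{0} -o {1}"
# Hypothetical object path for illustration only:
cmd = expand_get(template, "bucket/ab/cd/ef/bucket-example.xdr.gz",
                 "/tmp/bucket-example.xdr.gz")
print(cmd)
```

So editing the `get=` line simply changes where (and how) the node fetches each history file, which is why pointing it at a different reachable history host can unstick the catch-up.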
Hi, can you explain the full steps? Restart the computer? Remove the blockchain data or not? Remove the Docker containers? These kinds of questions must be important for this trick, if it's real.
Or just kill the Pi Network app from the running taskbar, and after 20 minutes it syncs?
And which Windows, what is your PC, where are you from? Maybe everything makes the magic :):):)
Because for the last 3 months I've had enough of trying to sync and seeing local block number 1, and not even the bucket 19/21 :)
The node bonus is nothing most of the time, and I've never understood in all these years how it works :D
So come on, let's fix this, or someone explain whether everything is OK or what on earth is going on.
Or just kill the Pi node from the taskbar?
Zip68, you are ahead of me. I've learnt to leave the node up if there are no problems and give it time to sync. Yesterday you had 8 outgoing connections and 16 incoming, which is good, better than mine at present (8 outgoing, 2 incoming). But after a Docker update and a Pi node version update, it takes time to sync in itself. Did leaving the most up-to-date Docker and Pi node versions help you sync your node? Something to take into account when researching problems: if a solution we find is old, it can itself be outdated. I'm thinking keep it up to date and let the software do its thing, given time?
How long does it take to get synced? Some people say 30 minutes, some say 2 hours, 4 hours, some say 1 day, some 4 days. Most, like me, never, for the last 3 months... so there must be an issue, right?
I understand, m8. I've been trying to sync my node for the last 2 to 3 weeks, but even in that time Docker has updated and so has the Pi node version, and more than that, in the last few days, when we stop the node and start again we are starting from scratch. I've been trying to sync for the last few weeks and, like you, done everything, but I think it takes time, especially that last 10%. You are doing well: you have loads of incoming and outgoing connections. So do what I'm going to do: leave it up for a week to do its thing, and if no concrete solution comes up in a week's time, look at it again from there. I've been like you: when restarting it takes about 5 minutes to get to 90%, but it's a dream to get to 100%.
I don't know, but my laptop crashes about once a day, or randomly every few days, and I need to force it off and then restart. So for me a 100% sync is a dream :) And today only 2 or 3 incoming connections, and still block number 1, even though I tried the latest magic :(
Please let me know of any updates on your side.
Strange- I just checked my Pi app and it says my Node bonus is non-zero. So I must still be getting credit for running the Node despite it hanging at 19 blocks.
I am experiencing the same now. The Node is still stuck at bucket 19/21, however I do receive a Node Bonus today. It actually is only 0.01 :/ but at least something is happening.
Since the update to Node version 0.4.11 I've also noticed I'm getting bonuses, but I've suffered quite an uptime drop due to all this mess, which has led to a drop in the bonus. It does seem to be increasing day by day, though. This should be reflected in a node bonus increase.
I'm not connected with the Devs in any way, but my advice to everyone would be to update to the latest Docker and the latest Pi node software, start it up and just leave it running. With 0.4.11 you will see the bucket loading status in the Pi app without having to look at the logs. Check regularly that the 'Latest block number' is changing frequently; this means the node is connected and reading from the remote nodes. Don't worry too much about the 'Local block number' remaining at 1. By examining the logs we can see that the node is making hundreds of attempts a day to retrieve the required buckets, so eventually, when they sort out this issue, the buckets will download without any intervention from us. The most important thing for all of us is getting our node bonuses. So set it all up, let it run for a few days, and see if you start getting the bonus again. Let everyone know here.
Even if we all do start receiving our Node bonuses I think it's really important for us to keep checking in every now and again with comments on whether the buckets are still stuck and whether there has been any movement with the local block number.
That way any Pi Devs who might read this thread will be getting feedback on any fixes they apply.
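The diagnostic rule described above ('Latest block number' advancing means connected; 'Local block number' stuck at 1 means not catching up) can be written down as a tiny helper. This is a hypothetical formalization of the advice, not part of the Pi node software:

```python
def sync_status(local_block: int, latest_before: int, latest_after: int) -> str:
    """Classify node state from two readings of the latest block number
    plus the current local block number, per the advice above.

    latest_before / latest_after: 'Latest block number' read a while apart.
    local_block: the 'Local block number' shown in the app.
    """
    if latest_after <= latest_before:
        return "disconnected"                  # latest block is not advancing
    if local_block <= 1:
        return "connected, not catching up"    # the stuck-at-1 symptom
    if latest_after - local_block <= 5:        # small tolerance, arbitrary
        return "synced"
    return "catching up"
```

The tolerance of 5 blocks is an arbitrary choice for illustration; the point is simply that 'Latest' moving while 'Local' stays at 1 is the signature of this issue.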
Latest Docker and Node will solve the issue.
Kept it running overnight and my Node Bonus significantly increased today!
Here's an update on my node.
Still running the node continuously. Local block still stuck at 1. Buckets still stuck at 19/21 (Node version 0.4.11). The latest block number is updating. The block number shown in 'Catching up to ledger' updates less frequently, but it does update. Uptime seems to be increasing daily. I am receiving a node bonus daily, but it is decreasing slightly nearly every day.
Hi all,
Currently facing the issue below, with Docker Desktop version 4.28.0 (139021) and Pi Node version 0.4.9, ports opened correctly with all the checks done:
When running the blockchain, I always get stuck at local block 1 and never reach the latest block, until the container stops and restarts, on and on again.
When troubleshooting, the state always alternates between Joining SCP and Catching Up, and the rest is as in the screenshot below.
Please help, I have tried everything but it won't work.