Closed Turosik closed 5 years ago
How can we help? Any instructions to collect debug data?
Hi Turosik, can you help us collect some memory information? Attach to pchain's console and run debug.writeMemProfile()
every 2~4 hours, or whenever you are available, and send the output files to us. The output file will be created under the current directory where you run the attach command.
e.g. the first time I would run debug.writeMemProfile("memory0"), then after several hours run debug.writeMemProfile("memory1"), and repeat this several times.
thank you!
We are monitoring the memory usage on EC2 closely as well. So far we have found some hints but not made much progress; we will continue to work on this.
memory 0 memory0.zip
memory 1 memory1.zip
memory 2 memory2.zip
memory 3 memory3.zip
memory 4 memory4.zip
memory 5 memory5.zip
memory 6: memory usage is over 90% now, so I'm going to restart the process manually before it crashes... it takes very long to sync after a crash. memory6.zip
Thanks for your dump files. We have identified the issue and fixed it in release 1.0.24 last week. We have monitored our AWS node over the weekend and memory usage looks normal; please update to the new version on your side as well and see if it addresses your issue.
check10-20190705-0750.zip check11-20190705-1720.zip check12-20190706-0059.zip check13-20190706-1602.zip check14-20190707-0833.zip
Above are from a recent single run, unfortunately cut short when the VPS provider decided to schedule some maintenance.
I have a node on AWS EC2 with the recommended configuration (8 GB RAM). The pchain process runs for 36-48 hours, progressively taking more and more RAM, until it finally runs out of memory and crashes. It also won't start again without a full instance reboot.