Also receiving this issue. Geth slowly eats up all available RAM on the system until it crashes.
Edit: Ended up increasing the server RAM to 8 GB; things are running okay.
Having the same issue, built from the latest 1.8.3 source, Linux, RAM 2 GB. Sync mode fast, Ropsten network. Is it possible to limit geth's RAM usage instead of increasing the server RAM?
UPD: adding an 8 GB swap file didn't help. Same behaviour: it eats up all available memory and dies.
Also, has this problem appeared only in 1.8.2? Will reverting to 1.8.1 or 1.8.0 help?
Hey Sapph1re, I don't think there's any way to limit geth's RAM usage at the moment, and I'm not sure whether reverting to an older version will help. That said, after upgrading to 8 GB of RAM my geth has not crashed, and its RAM usage seems to have stabilized around 4 GB.
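One general Go-runtime knob you could experiment with (this is not a geth-specific feature, so treat it as a sketch with no guarantees) is the GOGC environment variable: lower values make the garbage collector run more often, trading CPU for a smaller heap. Something like:
# default is GOGC=100; 50 is just a placeholder value to try
GOGC=50 geth --syncmode "fast" --cache 512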
I've included the systemd service file that I'm using to launch geth; it may be of use to you:
[Unit]
Description=The Ethereum Blockchain Command Line Interface
Documentation=man:geth(1)
After=network.target
[Service]
User=ethereum
ExecStart=/usr/bin/geth --syncmode "fast" --cache 1024 --rpcaddr "0.0.0.0" --rpcport 9551 --rpc --rpcapi personal,eth,shh,web3 --rpccorsdomain "52.224.54.206" 2>&1 >> /home/ethereum/geth.l$
Restart=always
[Install]
WantedBy=default.target
Server specifications: Ubuntu 16.04.4 x64, 8 GB memory, 160 GB SSD disk, 4 vCPUs (digitalocean.com droplet)
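If you'd rather have systemd kill and restart geth before it drags the whole box down, you could also cap the service with a memory limit in a drop-in. A minimal sketch (the path is hypothetical; MemoryMax= requires a reasonably recent systemd, older versions use MemoryLimit= instead):
# /etc/systemd/system/geth.service.d/memory.conf (hypothetical drop-in)
[Service]
MemoryMax=6G
# combined with Restart=always in the unit above, geth comes back up after the OOM kill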
Setting the --cache flag is supposed to do it, but there is either a memory leak or the flag isn't being respected. We've also noticed this in both fast-sync mode and proper full-sync mode, on mainnet and the Ropsten testnet.
In general we've noticed that while syncing, Geth will eat up as much memory as there is available on the machine. I don't think this is an issue with just v1.8.2; we've seen it all the way back to v1.7.x.
Once in sync, the memory and CPU profile of Geth drastically decreases, but it spikes every time a block is verified (for obvious reasons). So you need to give it quite a bit more than your --cache value. In general we've been seeing things stabilize at whatever your --cache value is + 3 GB.
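A quick way to sanity-check that number on your own node (a sketch, assuming a procps-style ps):
# geth's resident set size in kilobytes, refreshed every 60 seconds
watch -n 60 'ps -C geth -o rss='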
I am having an "out of memory" issue with Geth 1.8.9 that is not triggered by fast sync, but just by running Geth. Whenever I start Geth and let it run, after a while it crashes with an "out of memory" error message. My Ubuntu Xenial server is configured with 8 GB of RAM and 32 GB of swap. The command line to start Geth is the following:
go-ethereum/build/bin/geth --nodiscover --cache=2048 --rpc --rpcaddr 0.0.0.0 --rpccorsdomain "*" --rpcapi "db,eth,net,web3,personal,debug,txpool" js ./ethereum/mine_pending.js
I used to run Geth 1.6.x with the same command and never had an issue with it running out of memory. I have actually tripled the swap space on this server to accommodate the "out of memory" issue, and it is still there.
Yeah, running more nodes, this is becoming a real problem, both on the Ropsten testnet and on mainnet. There is a memory leak somewhere: Geth continually eats up memory until it runs out of machine memory.
We're seeing this in v1.8.2 and later versions, but we haven't been able to upgrade past v1.8.2 due to https://github.com/ethereum/go-ethereum/issues/16846
We're running on Kubernetes, so this is particularly painful because our Geth nodes continually get restarted.
I should probably have mentioned that I'm running 3 geth Docker containers on one single machine (Mainnet, Ropsten, Rinkeby).
I noticed that it crashes after a few hours, but only once I activate --rpcapi "admin,db,eth,miner,net,web3,personal,txpool".
Without that argument, geth runs fine on Ropsten.
The failing setup:
cat docker-compose.yml
version: "3"
services:
  geth_testnet:
    container_name: geth_testnet
    image: ethereum/client-go:v1.8.16
    command: '--testnet --syncmode "fast" --rpc --rpcaddr 0.0.0.0 --rpccorsdomain "*" --rpcvhosts "*" --rpcapi "admin,db,eth,miner,net,web3,personal,txpool"'
    ports:
      - "8545:8545"
      - "30303:30303"
      - "30303:30303/udp"
    volumes:
      - testnet_data:/root/.ethereum
volumes:
  testnet_data:
Hope that helps!
Ah, that's interesting! I upgraded my node to 16 GB of memory and geth hasn't crashed on me since. The --cache flag doesn't actually seem to limit geth's memory usage.
Same issue. When I changed from CentOS 7.4 to Ubuntu 16.04 and changed the RPC port from 8581 to 8681, it seems to have stopped. I think the RPC port may be the main cause.
Here are my specifications:
Server: Ubuntu 16.04.4 x64, 4 GB memory, 40 GB HDD disk, 2 vCPUs
Geth Version: 1.8.16-stable, Architecture: amd64, Go Version: go1.11.1
You are probably getting hit by an RPC attack.
Hi guys, I also encountered this problem, but when I removed the --rpc config, geth works well. I think the RPC service is the main problem, and I hope someone can fix this.
The main problem is most likely attackers on the internet discovering the RPC service and doing brute-force password guessing against personal.unlock. Luckily, decrypting the keystore is very memory intensive.
Note: this doesn't seem to be the case for the original reporter, but very likely for those of you that experience problems only when rpc is enabled.
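If you don't actually need remote access, the simplest mitigation is to not expose the endpoint at all and bind RPC to localhost only (a minimal sketch; adjust the remaining flags to your setup):
# 127.0.0.1 instead of 0.0.0.0 keeps the RPC port off the public interface
geth --rpc --rpcaddr "127.0.0.1" --rpcport 8545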
Hi holiman, I set up a firewall to protect the RPC port, and this problem has disappeared. Thank you very much, and I hope this will help someone else who encounters this problem. 👍
@hashfury42 could you please share how you set up your firewall on that port? Did you allow only a particular IP to have access to it?
Yes, only a particular IP has access to that port. If you use Ubuntu, you can read this: https://help.ubuntu.com/community/UFW
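For example, with UFW (a sketch; 203.0.113.5 is a placeholder for your client's address, and the allow rule must be added before the deny so it matches first):
sudo ufw allow from 203.0.113.5 to any port 8545 proto tcp
sudo ufw deny 8545/tcp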
@holiman thanks, in my case this method helped me. I wrote up a solution at https://stackoverflow.com/questions/53206228/ethereum-geth-out-of-memory/ and hope it will be useful.
Hi Alex, greetings from Singapore and thanks a lot for the information. I will try this out by tomorrow morning. One more question: if I want to enable access for two IP addresses, do I need to repeat the iptables commands twice with different IP addresses?
Regards, Jain
@NIVJAIN Hi Jain!
As to "repeat the iptables commands twice with different IP addresses": if I understand correctly, yes. However, I recommend reading up on Linux iptables; your main task is to ensure that only strictly defined addresses can access your node on that port.
If you grant access to the node for two IP addresses, don't forget to specify the --rpccorsdomain parameter:
--rpccorsdomain value  Comma separated list of domains from which to accept cross origin requests (browser enforced)
Please write about your results...
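For the two-address case, a sketch with placeholder IPs (one ACCEPT per allowed address, then a catch-all DROP; order matters, since iptables matches rules top-down):
iptables -A INPUT -p tcp --dport 8545 -s 198.51.100.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8545 -s 198.51.100.11 -j ACCEPT
iptables -A INPUT -p tcp --dport 8545 -j DROP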
I'm not sure if it's the same problem or not. I used to be troubled by the same geth "out of memory" issue, so I upgraded the AWS server spec and the problem was gone. But a new issue came: high CPU and memory usage. It slowly climbs up to the CPU and memory limits, which paralyzes the AWS server.
Server: AWS t3.medium (64-bit Ubuntu 16.04, 2 cores, 4 GB RAM, 8 GB swap). Versions: 1.8.1-stable and 1.8.16-stable, in both fast and light sync modes.
geth --rinkeby --rpc --rpcaddr "0.0.0.0" --rpcvhosts=* --rpcport "8545" --rpccorsdomain "neojuneELB-1439772252.ap-northeast-2.elb.amazonaws.com" --rpcapi "eth,net,web3,personal,admin" --syncmode "light" --cache "64"
It happened to me just now on my private Ethereum network. Is there any progress on this?
geth --rpc --rpcport "8545" --rpcaddr "127.0.0.1" --rpccorsdomain "*" --ws --wsorigins "*" --wsaddr "127.0.0.1" --wsport "8546" --rpcapi "web3,personal,eth,net" --wsapi personal,web3,eth,net,db --cache 2048 console
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0x11ecc01, 0x16)
/build/ethereum-CazSBy/.go/src/runtime/panic.go:774 +0x72
runtime.sysMap(0xc194000000, 0x10000000, 0x20db3f8)
/build/ethereum-CazSBy/.go/src/runtime/mem_linux.go:169 +0xc5
runtime.(*mheap).sysAlloc(0x20a9840, 0x10000000, 0x0, 0x882b79)
/build/ethereum-CazSBy/.go/src/runtime/malloc.go:701 +0x1cd
runtime.(*mheap).grow(0x20a9840, 0x8000, 0x1ffffffff)
/build/ethereum-CazSBy/.go/src/runtime/mheap.go:1255 +0xa3
runtime.(*mheap).allocSpanLocked(0x20a9840, 0x8000, 0x20db408, 0x181b730)
/build/ethereum-CazSBy/.go/src/runtime/mheap.go:1170 +0x266
runtime.(*mheap).alloc_m(0x20a9840, 0x8000, 0x7f0084680101, 0x7f00846879a0)
/build/ethereum-CazSBy/.go/src/runtime/mheap.go:1022 +0xc2
runtime.(*mheap).alloc.func1()
/build/ethereum-CazSBy/.go/src/runtime/mheap.go:1093 +0x4c
runtime.(*mheap).alloc(0x20a9840, 0x8000, 0x7f0084010101, 0x7f007d7b4eb0)
/build/ethereum-CazSBy/.go/src/runtime/mheap.go:1092 +0x8a
runtime.largeAlloc(0x10000000, 0x7f007d7b0101, 0x46fabe)
This is a very old ticket, and we've worked on and fixed many memory issues over time. I'm closing this; it's better if new tickets are opened relating to the more recent versions.
Hi there,
I get a
fatal error: runtime: out of memory
after updating geth to 1.8.2 in my docker-compose file. These two screenshots, taken 1 minute 30 seconds apart (video attached: geth-out-of-memory.zip), show that geth is constantly eating up memory.
These are the CLI args used to start geth:
geth --syncmode "fast" --testnet --ws --wsaddr "0.0.0.0" --wsorigins "*" --rpcvhosts my.domain --cache 512
Here is my server RAM:
Is that related to https://github.com/ethereum/go-ethereum/issues/16244, https://github.com/ethereum/go-ethereum/issues/16243 and https://github.com/ethereum/go-ethereum/issues/16174 ?
Many thanks for your help!
System information
Geth version: 1.8.2
OS & Version: Linux, Docker
Backtrace: