whyrusleeping closed this issue 9 years ago
Maybe we could get uname -a and ulimit -n into ipfs diag sys?
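For context, the two commands being proposed just report the kernel and the per-process descriptor limit; a minimal sketch of what they print, assuming a POSIX shell on Linux:

```shell
# The two values proposed for inclusion in ipfs diag sys:
uname -a    # kernel name, release, and machine architecture
ulimit -n   # soft limit on open file descriptors for this shell
```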
I don't seem to have ipfs diag sys, just ipfs diag net, even with the more recent 0.3.9 version (43622bd5ee).
$ ulimit -n
1024
$ uname -a
Linux Tarjan.ms.mff.cuni.cz 4.1.9 #1-NixOS SMP Thu Jan 1 00:00:01 UTC 1970 x86_64 GNU/Linux
@vcunat when you update your ipfs binary, you also have to restart the daemon. ipfs diag sys has been in the codebase since 0.3.8.
Ah, I'm sorry. I might've been calling the 0.3.7 binary. (It would also be standard to implement --version. I should've checked before writing; I see there's ipfs version instead.)
{
"diskinfo": {
"free_space": 7.8484594688e+10,
"fstype": "btrfs",
"total_space": 1.200330752e+11,
"used_space": 4.1548480512e+10
},
"environment": {
"GOPATH": "",
"IPFS_PATH": ""
},
"ipfs_git_sha": "",
"ipfs_version": "0.3.9",
"memory": {
"swap": {
"free": 3.967270912e+09,
"sin": 3.094478848e+09,
"sout": 5.54899456e+09,
"total": 5.239463936e+09,
"used": 1.272193024e+09,
"used_percent": 24.280976823961865
},
"virt": {
"active": 1.458900992e+09,
"available": 1.761681408e+09,
"buffers": 122880,
"cached": 6.91134464e+08,
"free": 1.05273344e+09,
"inactive": 1.171369984e+09,
"shared": 0,
"total": 4.008493056e+09,
"used": 2.955759616e+09,
"used_percent": 56.05127953600726,
"wired": 0
}
},
"net": {
"interface_addresses": [
"/ip4/127.0.0.1",
"/ip4/192.168.1.242"
]
},
"runtime": {
"arch": "amd64",
"compiler": "gc",
"gomaxprocs": 3,
"numcpu": 4,
"numgoroutines": 87,
"os": "linux",
"version": "go1.5.1"
}
}
ipfs version works.
Yeah, I found out a few seconds after posting (and edited the post straight away).
ah! my browser tab didn't update. carry on! (and thanks for the info)
@vcunat could you try to reproduce the "too many open files" issue now that you've updated your binary (and restarted the daemon)?
Aha, not anymore. I've now done many times more work than what used to reliably reproduce the problem.
Can be closed?
Yeah, can probably close this. Seems to be an issue with using older code.
@whyrusleeping I know this issue is closed, but I'm having the same problem on the IPFS node I run on my VPS.
The error message (which is repeating every second) is the following:
2016/03/22 15:49:28 http: Accept error: accept tcp4 0.0.0.0:80: accept4: too many open files; retrying in 1s
I leave the ipfs daemon running in screen and it works for a while but breaks after a few hours. The server I'm running this on has only 1 core and 512MB of RAM, could this be the issue?
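One way to check whether the daemon is actually exhausting its descriptor budget is to count entries under /proc (a Linux-only sketch; the pgrep pattern is an assumption, adjust it to your binary's name):

```shell
# Count open file descriptors of the running ipfs daemon via /proc (Linux).
# 'pgrep -x ipfs' assumes the binary is named ipfs; adjust if needed.
pid=$(pgrep -x ipfs | head -n 1)
ls "/proc/$pid/fd" | wc -l             # current number of open descriptors
grep "open files" "/proc/$pid/limits"  # soft/hard limit the daemon inherited
```

If the count sits near the soft limit, you'll start seeing the accept errors above.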
A bit more information about my use case: I am running one of the latest versions (built from master a few days ago), ipfs version 0.4.0-dev. I use the ipfs node as an HTTP server using the DNS trick, with a TXT record set as dnslink=/ipns/<hash>.
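For reference, the DNS trick boils down to a single TXT record on the domain; a zone-file sketch with a hypothetical domain and hash:

```
example.com.  300  IN  TXT  "dnslink=/ipns/QmExamplePeerID"
```

The daemon serving HTTP for that domain resolves the TXT record and serves the linked content.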
Here is some of the info you requested last time this happened.
➜ ~ ipfs diag sys
{
"diskinfo": {
"free_space": 16452583424,
"fstype": "61267",
"total_space": 1.536206848E+10
},
"environment": {
"GOPATH": "",
"IPFS_PATH": ""
},
"ipfs_commit": "7134930",
"ipfs_version": "0.4.0-dev",
"memory": {
"swap": 0,
"virt": 2.59996E+8
},
"net": {
"interface_addresses": [
"/ip4/127.0.0.1",
"/ip4/188.166.148.252",
"/ip4/10.16.0.6",
"/ip6/::1",
"/ip6/fe80::601:b9ff:fee5:1001"
]
},
"runtime": {
"arch": "amd64",
"compiler": "gc",
"gomaxprocs": 3,
"numcpu": 1,
"numgoroutines": 74,
"os": "linux",
"version": "go1.6"
}
}
➜ ~ ipfs version
ipfs version 0.4.0-dev
➜ ~ uname -a
Linux basilehenry.com 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
➜ ~ ulimit -n
1024
On 22.03.2016 21:03, Basile Henry wrote:
➜ ~ ulimit -n
1024
this is actually pretty low for IPFS, as that budget has to cover both open files (chunks, lots of them) and sockets. Try setting it to 4,000 or 10,000.
I didn't know what ulimit was, after a quick lookup it makes sense that I would want a higher value. I'll try that as soon as I can. Thanks for your help :-)
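A minimal sketch of raising the limit for the shell that launches the daemon (making it permanent, e.g. via /etc/security/limits.conf, varies by distro):

```shell
# Show the current soft and hard open-file limits.
ulimit -Sn
ulimit -Hn
# Raise the soft limit up to the hard limit for this session;
# a daemon started from this shell afterwards inherits it.
ulimit -n "$(ulimit -Hn)"
ulimit -Sn   # verify the new soft limit
```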
A couple people report seeing this recently. This will be the tracking issue for the problem.
I ask that anyone encountering the issue post the output of ipfs diag sys for me here, as well as any other information about what you did to cause the issue.