mrusme / superhighway84

USENET-inspired, uncensorable, decentralized internet discussion system running on IPFS & OrbitDB
https://xn--gckvb8fzb.com/superhighway84
GNU General Public License v3.0

Limit number of connections #14

Closed: mrusme closed this issue 2 years ago

mrusme commented 2 years ago

It has been reported that the number of connections IPFS creates makes it tricky to run Superhighway84, especially on older hardware.

It was mentioned that this is apparently a known issue in IPFS:

The libp2p team is currently refactoring the "dialer" system in a way that'll make it easy for us to configure a maximum number of outbound connections. Unfortunately, there's really nothing we can do about inbound connections except kill them as soon as we can. On the other hand, having too many connections usually comes from dialing.

I couldn't find out what the status of that libp2p refactoring is, though. However, people mention that disabling QUIC in the IPFS repository has helped a bit with the issue, e.g.:

... ipfs init --profile server, and (3) I removed the lines of quic under Addresses.Swarm in ~/.ipfs/config.

and

I just followed the advice here of disabling QUIC: ipfs config --json Swarm.Transports.Network.QUIC false
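
For reference, a sketch of both approaches (the listen addresses below are the usual go-ipfs defaults and might look different in a Superhighway84 repository):

    # disable the QUIC transport entirely (as quoted above)
    ipfs config --json Swarm.Transports.Network.QUIC false

    # and/or drop the QUIC listen addresses, which by default usually are
    # /ip4/0.0.0.0/udp/4001/quic and /ip6/::/udp/4001/quic
    ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001"]'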

Another idea could be to wrap Superhighway84 in a set of iptables rules that artificially limit its available bandwidth and number of connections. I didn't investigate whether that really works, but apparently other users have done that for IPFS; see the untested sketch below.
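
A rough, untested sketch of what that could look like, assuming the default swarm port 4001 and the connlimit/conntrack iptables modules; actual bandwidth shaping would additionally need tc:

    # cap concurrent inbound TCP connections to the swarm port
    iptables -A INPUT -p tcp --syn --dport 4001 -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset

    # rate-limit new outbound dials; note this only catches peers listening on the default port
    iptables -A OUTPUT -p tcp --dport 4001 -m conntrack --ctstate NEW -m limit --limit 10/min --limit-burst 20 -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 4001 -m conntrack --ctstate NEW -j REJECT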

Jay4242 commented 2 years ago

It's a pretty serious problem. Anything over 2,000 peers makes it unusable on some hosts. Why does it seem to ignore the HighWater setting so much? I have it set to 10, yet it still climbs to 2,000. This is also the only time I see https://github.com/mrusme/superhighway84/issues/3 and other errors. The OOM killer comes hunting it at that point.

mrusme commented 2 years ago

Currently digging through the IPFS godocs to see if I can find anything. Right now I'm looking at Swarm.ConnMgr and trying to figure out whether "none" might for some reason be set for Superhighway84 - which I don't believe is the case, though, as the default is "basic" and I don't remember having messed around with that.
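
For anyone who wants to check their own repo: with IPFS_PATH pointed at whatever repository Superhighway84 uses, the connection manager settings can be inspected and tightened like this (the numbers are just examples, not values Superhighway84 ships with):

    # inspect the current connection manager settings
    ipfs config show | grep -A 5 '"ConnMgr"'

    # make sure the basic connection manager is active and tighten its limits
    ipfs config Swarm.ConnMgr.Type basic
    ipfs config --json Swarm.ConnMgr.LowWater 50
    ipfs config --json Swarm.ConnMgr.HighWater 100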

mrusme commented 2 years ago

@Jay4242 just an idea, but can you try running the official IPFS daemon on the very same repo and see how it behaves over a longer period of time? E.g. whether the OOM killer also triggers at some point and how the connections behave there.
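
Something along these lines should do (untested; the repo path below is just a placeholder for wherever your Superhighway84 repository actually lives):

    # point the official CLI/daemon at the same repository Superhighway84 uses
    export IPFS_PATH=/path/to/superhighway84-ipfs-repo
    ipfs daemon &

    # watch how the peer count develops over time
    while true; do ipfs swarm peers | wc -l; sleep 10; done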

Jay4242 commented 2 years ago

It did calm down a bit: there was an initial spike up to 2k, and if it survived that, it settled lower, around 200-600 peers. When I run the ipfs daemon on that path and run ipfs swarm peers | wc -l in a loop, it does spike into the hundreds initially, then drops down to ~10-20. Maybe a separate issue, but a screen re-draw key combination could be handy for clearing errors, although reading an article and exiting also clears the screen.

mrusme commented 2 years ago

Interesting. I've been running 0.0.4 on a VPS for the whole day now and it neither crashed nor reported any "too many open files" issues.

Maybe a separate issue, but a screen re-draw key combination could be handy for clearing errors, although reading an article and exiting also clears the screen.

Yeah, unfortunately running tview's Application.Redraw() can lead to the whole application freezing, for reasons I haven't bothered to investigate yet.

mrusme commented 2 years ago

A few more issues in go-ipfs that might be related:

mrusme commented 2 years ago

Connections seem to be down to a little over 300 now. Maybe that's due to the latest change in master, or it's simply a time when not many peers are online.

screenshot_2021-12-29-004829

CPU usage and network usage are also both very decent right now.

screenshot_2021-12-29-004905

mrusme commented 2 years ago

@Jay4242 please try setting your IPFS repository to the lowpower profile and test with that:

ipfs config profile apply lowpower

See this for more info.
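
If I remember the go-ipfs defaults correctly, lowpower roughly boils down to the following settings - the exact values may differ per version, so check ipfs config show after applying it:

    # roughly what the lowpower profile applies (values from memory, verify locally)
    ipfs config Routing.Type dhtclient
    ipfs config --json Swarm.ConnMgr.LowWater 20
    ipfs config --json Swarm.ConnMgr.HighWater 40
    ipfs config Swarm.ConnMgr.GracePeriod 1m
    ipfs config Reprovider.Interval 0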

Jay4242 commented 2 years ago

Seems much better now. I did have it on the lowpower profile before, even with the limits set lower than that. That's why I was so baffled that it was hitting numbers as high as it was.
Thanks for the updates!

mrusme commented 2 years ago

I will close this issue for now, but feel free to comment if things start getting worse again. I also talked to the folks over at ipfs on Matrix, and it seems that upgrading IPFS from the currently used 0.9.1 to the latest version might also help with performance. However, since a few interfaces have changed, that would require me to update go-orbit-db, which is a bigger task.

Winterhuman commented 2 years ago

@mrusme Btw, also check the value of GracePeriod: during that time all low and high water limits are ignored so that connections can build up. Consider lowering the value to decrease the amount of time in which new connections are exempt from the limits.
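
E.g. (the value here is just an example):

    # shorten the window during which new connections bypass the low/high water limits
    ipfs config Swarm.ConnMgr.GracePeriod 20s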