ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

Resource Constraints + Limits #1482

Closed (jbenet closed this issue 2 years ago)

jbenet commented 8 years ago

We need a number of configurable resource limits. This issue will serve as a meta-issue to track them all and discuss a consistent way to configure/handle them.

I'm going to use a notation like thingA.subthingB.subthingC. We don't have to keep this at all; it just helps us bind scoped names to things. (I'm using . instead of / because the . could reflect the JSON hierarchy in the config, though it doesn't have to; e.g. repo.storage_max and repo.datastore.storage_gc_watermark could appear in the config as Repo.StorageMax and Repo.StorageGC, or something similar.)
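As a purely hypothetical illustration (these are not final key names), the dotted names could map onto nested config entries that you would address with the ipfs config command, e.g.:

$ ipfs config Repo.StorageMax 10GB          # node.repo.storage_max
$ ipfs config Repo.StorageGCWatermark 90    # node.repo.datastore.storage_gc_watermark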

Possible Limits

This is a list of possible limits. I don't think we need all of them, as other tools could limit these more, particularly in server scenarios. But please keep in mind that some users/use cases of ipfs demand that we have some limits in place ourselves, as many end users cannot be expected to even know what a terminal is (e.g. if they run ipfs as an Electron app or as a browser extension).

note on config: the above keys need not be the config keys, but we should figure out some keys that make sense hierarchically.

What other things are we interested in limiting?

jbenet commented 8 years ago

The most pressing are:

jbenet commented 8 years ago

@rht would this be an issue you could work on? It's needed sooner rather than later, particularly node.repo.storage_max (plus running GC if we get close to it) and node.network_bandwidth_max.

@whyrusleeping your help will be needed no matter who implements this.

whyrusleeping commented 8 years ago

@jbenet yeap. My concern is that before we even think about configurable limits and such, we need to determine how the system behaves when you are out of a certain resource, whether that's open connections, disk space, or memory. Once we determine how a limit will manifest in the application, we can start setting those limits.

jbenet commented 8 years ago

We already know how some of those would behave. For disk, for example: trigger GC after a threshold, and stop accepting blocks after the limit.
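A crude external approximation of that policy, sketched with the repo commands (assuming ipfs repo stat and ipfs repo gc are available in your build):

$ ipfs repo stat    # reports RepoSize, NumObjects, and the configured maximum
$ ipfs repo gc      # what the daemon would trigger once RepoSize crosses the watermark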


whyrusleeping commented 8 years ago

okay, when we stop accepting blocks, how does that affect the user? Do we just start returning 'error disk full' up the stack everywhere? (probably)

jbenet commented 8 years ago

yeah, it's a write error. same would happen if the OS's disk got full.


davidar commented 8 years ago

:+1: the daemon keeps consuming my meager ADSL upload bandwidth

jbenet commented 8 years ago

These are a big deal, we should get back on these.

slothbag commented 8 years ago

My VPS runs out of RAM pretty quickly, with IPFS consuming 80% of it (not while adding anything, just idling). Other daemons start to shut down due to running out of memory.

Granted, my VPS has only 128 or 256 MB (can't remember which), but still, I would think it's possible to seed some content with minimal resources.

jbenet commented 8 years ago

Agreed. We should start adding memory constraints as tests for long-running ipfs nodes.
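A crude way to watch a long-running node's memory in the meantime (a sketch, assuming a Linux host running a single ipfs process):

$ while true; do ps -o rss= -p "$(pgrep -x ipfs)"; sleep 60; done    # resident memory in KB, sampled once a minute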

rht commented 8 years ago

Update here:

jbenet commented 8 years ago

Thanks for the update @rht.

Re limits, I think people will mostly want to set hard BW caps in explicit KB/s.

SCBuergel commented 8 years ago

What other things are we interested in limiting?

I just stumbled upon this discussion while trying to limit overall outbound traffic (per day / month). I think limiting output traffic could be an interesting thing (especially with respect to Filecoin one day), as egress traffic is typically metered in cloud settings like AWS or Azure. There I am fine with temporary spikes of high bandwidth as long as my output traffic stays within some bound per unit of time. Setting a limit per hour / day / month might make sense to prevent blowing a month's volume in a day or an hour.
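Until something like that exists, a crude external check is possible with the counters the daemon already exposes (a sketch; comparing against a budget and any cron wiring are left out):

$ ipfs stats bw | grep TotalOut    # cumulative bytes sent since the daemon started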

PlanetPlan commented 8 years ago

Hi, thanks very much for IPFS.

I did not carefully read the above, so some of the following may be duplicates. These are all long-term things to think about, nothing that is a headache for me right now. The following are some usage models that may suggest features for controlling resources:

clownfeces commented 7 years ago

For VPN users, being able to limit the maximum number of connections is a very important feature, since many VPNs automatically disconnect you if you have too many open connections (it's probably some sort of protection against spammers and DDoSers). IPFS by default creates hundreds of connections, so it's barely usable unless you don't mind getting disconnected regularly.

davidak commented 7 years ago

I want to report some resource usage stats:

I have an ipfs node version 0.4.2 running on a VM with 1 core and 1 GB RAM. No files added or pinned!

[Screenshots: Bildschirmfoto 2016-08-06, 18:00 and 18:01] It uses 465 MB of RAM just to keep connections to 214 peers open. (Is that all the nodes currently running?)

Kubuxu commented 7 years ago

It means that it is directly connected to 214 peers; those are live nodes in the network. We might want to start limiting that. Deluge (a torrent client) by default allows 200 connections with only 50 active at a time, but it uses uTP, which we were unable to adopt because the uTP library for Go kept hanging.

@davidak is that a netdata collector for IPFS? Looks nice, have you published it somewhere?

davidak commented 7 years ago

@Kubuxu the IPFS netdata plugin just got merged some minutes ago ;)

https://github.com/firehol/netdata/pull/761

fiatjaf commented 7 years ago

What bothers me is the network usage:

Makes even ssh'ing to my VPS horribly slow.

slothbag commented 7 years ago

I've had some luck using the Linux "tc" command to throttle IPFS down to about 10 KB/s outbound; this has the side effect of dropping incoming traffic down to about 15-20 KB/s.

I can see IPFS is using 100% of its allocated 10 KB/s all day, every day, but at least I can calculate how much bandwidth that is per month to ensure I don't go over my quotas.

And a nice bonus is that it significantly reduces memory usage, which is now hovering around 50-100 MB.
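For reference, a minimal sketch of that kind of rule (eth0 assumed as the outbound interface; 80kbit is roughly 10 KB/s, and note this shapes all traffic on the interface, not just ipfs):

$ sudo tc qdisc add dev eth0 root tbf rate 80kbit burst 4kb latency 400ms
$ sudo tc qdisc del dev eth0 root    # remove the limit again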

jbenet commented 7 years ago

@slothbag does it work in that condition?

slothbag commented 7 years ago

Yup, it's very slow obviously, but I'm just seeding content (gx packages mainly). It takes maybe a few minutes for a 500 KB-1 MB package to be sent out, which only has to happen once; then it's cached on my other nodes.

jbenet commented 7 years ago

@slothbag I'm glad it actually works :)

pjz commented 7 years ago

node.network_bandwidth_max is too coarse: how about node.network_bandwidth_max.in and node.network_bandwidth_max.out? VPSs often have asymmetric costs for bandwidth; also, things like HTTP gateways may want to rate-limit their outgoing bandwidth so their proxy performance isn't impacted by serving blocks via ipfs.

maxlath commented 7 years ago

Is there an explanation somewhere to better understand what is happening with all this data coming in and out? I have a daemon that has been running for 2 or 3 days and this is what I got:

$ ipfs stats bw
Bandwidth
TotalIn: 3.4GB
TotalOut: 6.6GB
RateIn: 35KB/s
RateOut: 12KB/s

I haven't added much (just one 30 KB gif), and I haven't shared the hash with anyone: where is all this activity coming from? oO

[Edit: ipfs version 0.4.2]

zekesonxx commented 7 years ago

@maxlath other people using your node to retrieve files. This is a big part of what this issue is about, since it's not okay to a lot of people (including myself).

pjz commented 7 years ago

My understanding is that it's DHT traffic, nothing to do with actual file retrieval.
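One way to check where the bytes are going is the per-protocol breakdown, if your version supports it (a sketch; the DHT protocol ID may differ between releases):

$ ipfs stats bw --proto /ipfs/kad/1.0.0    # bandwidth attributed to the DHT
$ ipfs stats bw --poll --interval 10s      # watch the overall rates live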

fiatjaf commented 7 years ago

I have a mostly subjective question and, although this isn't the best place to ask, I'll ask anyway (if I sound like a jerk, please blame my poor English skills instead of @blaming me):

you people who are writing IPFS, or writing fancy things on top of IPFS, or storing important stuff on IPFS, how do you manage to keep a personal node running? No VPS can cheaply handle it, and no home internet connection would survive such network-eagerness. At this point, with so much resource usage, IPFS seems like a horrible virus that only stops when the victim is dead, unsuitable for any computer. How do you do it? And why isn't anything about the current solutions to this problem mentioned in the beginner tutorials?

SCBuergel commented 7 years ago

:+1:

whyrusleeping commented 7 years ago

@fiatjaf Thank you for voicing your concerns, it's always good to have honest opinions from the community.

We definitely know that this is an issue and are working pretty consistently towards fixing it. Some info about recent work towards this:

There are a good number of more minor fixes related to optimizing the buffering of outgoing packets as well.

As far as "How do you keep a node running". Personally, I have two nodes running full time at home, one of these on 0.4.2, one on latest master. These are the nodes that I initially seed the prebuilt binary distributions from. They have a constant upload between 80KB/s to 300KB/s, which is definitely high, but not completely absurd as it has been in the past. The node running master is nearly always on the low bound around 80KB/s, while the 0.4.2 node hangs out more frequently at ~300KB/s (sometimes lower, sometimes much higher).

Edit: I pulled up our gateways' metrics page; they are averaging around 700 KB/s. That's the main bootstrap nodes and the nodes behind ipfs.io.

So yeah, bandwidth usage is far from ideal, but it's improving quickly. You all can definitely help us out. Feedback is really important: letting us know how much bandwidth ipfs is using, under what workloads and program versions, helps. Trying out newer development versions and letting us know how they behave is also very appreciated. Helping us fix bugs and generally move development forward (even if not directly related to improving the bandwidth situation) gives us more time to solve these hard problems. And anyone who has solid knowledge of distributed systems in general can help by giving us feedback on how we can move our content routing systems forward.

Thanks

zekesonxx commented 7 years ago

@whyrusleeping I have awful Internet: 130 KiB/s peak down and 20 KiB/s peak up (mutually exclusive). I think IPFS is massively interesting, but any idle bandwidth usage is not acceptable for my connection.

whyrusleeping commented 7 years ago

@zekesonxx Yeah, having a connection like that makes it difficult to properly utilize ipfs right now due to our use of the DHT for content routing. The vast majority of the idle traffic comes from the DHT: participating in the DHT means that you are helping store routing and peer information for the network, and also responding to lookup requests for that information. As I mentioned in my previous comment, we are working on future solutions that will allow nodes to be part of the ipfs network without having to run a DHT.

loadletter commented 7 years ago

@whyrusleeping Wouldn't it be possible to run the DHT over plain, connectionless, unreliable UDP, with a compact format/compression, and only use TCP for transfers/fallback, something like /ip4/0.0.0.0/udp/4001/dht? Both eMule/Kad and the BitTorrent DHT manage to work with very little overhead (running eMule on a 56k connection with thousands of files and 2K+ DHT entries comes to mind).

whyrusleeping commented 7 years ago

@loadletter Yeah, using UDP for the DHT is something I want to try at some point, but it's not a magic bullet. The benefit of UDP is that we really don't care too strongly if packets get dropped, but what we lose relative to TCP (or similar) is congestion control. If we switch the DHT to UDP, there's a fair chance it would actually make things worse.

When comparing ipfs to BitTorrent's mainline DHT (or any other DHT), the primary difference is that ipfs provides random access to subsets of files. With torrents and similar systems, it's more of an 'all or none': you're part of this torrent and/or part of that torrent, and pieces of one torrent never get shared with another. Some of the advantages of our approach are that it becomes much easier to view and work with subsets of large datasets, to share and help pin only the pieces you're interested in, and, if a number of different datasets share segments of data, users who have either dataset can serve that data.

The cost is that we need to do a good chunk more content routing to accomplish it. Using the DHT for content routing the same way that other systems do just isn't scalable long term, which is why we're researching newer ways to provide this 'random access' with less DHT traffic (or with no DHT at all).

In the short term though, I have just merged a new, still very experimental, feature that allows your node to choose not to serve DHT traffic while still being able to make requests. To try this out, build latest master from source and run the daemon with ipfs daemon --routing=dhtclient. And please report any issues you have running it in that mode.

loadletter commented 7 years ago

Using dhtclient, the idle bandwidth usage does seem to decrease, though it picks up again for a few minutes after retrieving some files from another node.

Also, when running the normal DHT, or during spikes with dhtclient, the bandwidth usage looks pretty symmetrical.

mib-kd743naq commented 7 years ago

node.repo.storage_max: this affects the physical storage that a repo takes up. this must include all the storage, datastore + config file size (ok to pre-allocate more if needed), so that people can set a maximum. (MUST be user configurable)

I think given https://github.com/ipfs/go-ipfs/issues/3444 one also needs to add a config for maximum data entries. Without such a limit it is trivial to hard-DoS a node by simply asking it to get a DAG with 1 million 1-byte raw data nodes.

/cc @matthiasbeyer @Kubuxu

ghost commented 7 years ago

Small note, I unchecked the storage_max todo in this thread's root comment -- there is a Datastore.StorageMax option, but it's currently only taken into account with regard to GC. It doesn't currently set a hard limit on storage usage.
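For reference, the GC-related knobs that do exist look roughly like this (exact key names and defaults may vary by version):

$ ipfs config Datastore.StorageMax 10GB
$ ipfs config --json Datastore.StorageGCWatermark 90    # run GC once usage reaches 90% of StorageMax
$ ipfs config Datastore.GCPeriod 1h
$ ipfs daemon --enable-gc                               # automatic GC only runs with this flag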

pataquets commented 6 years ago

Where applicable, different bandwidth limits for pinned items would be a nice feature to have. Users might be more inclined to provide bandwidth for files they find important enough to pin.

ajbouh commented 6 years ago

For my own use case this would be quite valuable. Not all workers I add to the network should serve all files equally. Files they create should be served with much higher priority than files they need and mirror.

On Aug 28, 2017 1:12 PM, "Alfonso Montero" notifications@github.com wrote:

Where applicable, different bw limits for pinned items would be a nice feature to have. Users might be more inclined to providing bandwidth for files they find important enough to pin.

— You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub https://github.com/ipfs/go-ipfs/issues/1482#issuecomment-325466578, or mute the thread https://github.com/notifications/unsubscribe-auth/AAAcnUNsThKVpfxBeOBajwvOi7IY7Guuks5scx8_gaJpZM4FZWAT .

dokterbob commented 6 years ago

+1 for node.memlimit

Although @jbenet suggests we can have this done at a higher level, a long-running, actively used IPFS daemon will currently eat all the memory available on a system, which basically means that without memory constraints it will not be stable.

Obviously, the memory footprint (#3318) could be reduced, but given that the project moves forward very fast feature-wise, new kinds of memory waste will keep popping up.

haasn commented 6 years ago

ipfs for me has several hundred open connections, which triggers a number of warning mechanisms, including many dozens of TCP resets per second, and makes it look like a network scan.

Connecting to this many peers seems insane for a p2p network. Being able to limit this would be a high priority for me.

whyrusleeping commented 6 years ago

This is resolved in the next release; try out the release candidate for 0.4.12.


gwpl commented 6 years ago

I also need a limit for the maximum number of open files! (causes: https://github.com/ipfs/go-ipfs/issues/4589 )
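In the meantime, the file-descriptor ceiling can at least be adjusted from outside (a sketch, assuming your build honours the IPFS_FD_MAX environment variable):

$ ulimit -n 2048                  # per-shell fd limit the daemon will inherit
$ IPFS_FD_MAX=2048 ipfs daemon    # hint to go-ipfs about how many fds it may use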

KrzysiekJ commented 6 years ago

@whyrusleeping: go-ipfs v0.4.13 still maintains several hundred open connections.

whyrusleeping commented 6 years ago

@KrzysiekJ Yeah, DHTs need to maintain a decent number of open connections to function properly. You can tweak it lower in your configuration file; look for Swarm.ConnMgr.
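For example (a sketch using the 0.4.12-era connection manager keys; defaults may differ in your version):

$ ipfs config Swarm.ConnMgr.Type basic
$ ipfs config --json Swarm.ConnMgr.LowWater 100
$ ipfs config --json Swarm.ConnMgr.HighWater 200
$ ipfs config Swarm.ConnMgr.GracePeriod 30s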

EternityForest commented 6 years ago

Does the DHT actually need to maintain large numbers of connections to work? It seems like you need to know the locations of a good number of DHT peers, but why actually connect to them?

Can't we just keep a list of a few thousand peers, and figure out if they're still up if/when they're needed?

Connectionless DHT queries should only take 1 UDP round trip per hop if you don't use a handshake or encryption, and it's not like you can't monitor someone pretty easily as is (connect to them and watch their wantlist broadcasts).

Congestion doesn't seem like it should be that much of an issue, especially if you limit retries. If they aren't there after 3 or 4 attempts, you just assume they aren't online anymore and try a different path.

An advantage of connectionless is that you can potentially store the last known IP of millions of nodes, meaning most of the network can be within 2 or 3 hops.

That has the issue of concentrating traffic on a few nodes for popular content, but I suspect there's ways of managing that.

Stebalien commented 6 years ago

Does the DHT actually need to maintain large numbers of connections to work? It seems like you need to know the locations of a good number of DHT peers, but why actually connect to them?

Correct. Unfortunately, we don't have any working UDP-based protocols at the moment anyway. However, we're working on supporting QUIC. While this wouldn't be a connectionless protocol, connections won't take up file descriptors, and we can save memory/bandwidth by "suspending" unused connections (remember the connection's session information but otherwise go silent).

In the future, we'd like a real packet transport system, but we aren't there yet. The tricky part will be getting the abstractions right; it will take a bit of work because we try to make all parts of the IPFS/libp2p stack pluggable.

Connectionless DHT queries should only take 1 UDP round trip per hop if you don't use a handshake or encryption, and it's not like you can't monitor someone pretty easily as is(Connect to them, and watch their wantlist broadcasts).

The encryption isn't just about monitoring; it also prevents middleboxes from being "smart". However, as we generally don't care about replay or perfect forward secrecy for DHT messages, we may be able to encrypt these requests without creating a connection (although that gets expensive if we send more than one message). Again, the tricky part will be getting the abstractions correct (and, in this case, not creating a security footgun).

An advantage of connectionless is that you can potentially store the last known IP of millions of nodes, meaning most of the network can be within 2 or 3 hops.

Unfortunately, IPFS nodes tend to go offline/online all the time. Having connections open helps us keep track of which ones are online. However, the solution here is to just not have flaky nodes act as DHT nodes.

andrewchambers commented 6 years ago

FWIW: many operating systems provide facilities for limiting all of those things, e.g. consider using Linux containers and separate disk partitions. It is then up to ipfs to just handle the error conditions returned by the OS properly.
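For example (a sketch of the OS-level approach; the limits, unit, and image name here are illustrative):

$ sudo systemd-run --scope -p MemoryMax=512M -p CPUQuota=50% ipfs daemon
$ docker run -d --memory=512m --cpus=0.5 ipfs/go-ipfs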

dokterbob commented 6 years ago

Your suggestion goes strongly against the long-standing common practice of Unix daemon design, where daemons should manage their own footprint and the OS should only interfere in error conditions.

For example, most forking servers allow the number of processes to be limited (e.g. Apache, PHP-FPM, Postfix). Many also allow limits on the memory used (e.g. Elasticsearch, MySQL). In addition, for disk caches it's normal to have hard and soft limits set.

Most system administrators consider a daemon that, when unconstrained, just eats up all the resources in a system to be badly designed. Only recently have these kinds of behaviours become somewhat tolerated, and really only amongst users of tools like Docker.

Mind you, many operating systems do not support such newfangled tools, and whether it is actually safe to rely on them remains to be proven (consider the large number of security issues in the early Xen days).


Macil commented 6 years ago

If you make the OS / docker limit the memory that ipfs uses, then will ipfs be careful to use less than that amount? If not, ipfs might just keep charging headfirst into the limit and get regularly killed/restarted by the system.