ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

Deployment plan for AutoNAT v2 #10091

Open · sukunrt opened this issue 1 year ago

sukunrt commented 1 year ago

Description

We plan to introduce AutoNAT v2 in the next go-libp2p release (v0.31.0) (PR). AutoNAT v2 will allow nodes to test individual addresses for reachability. This fixes issues with address handling within libp2p and allows a node to advertise an accurate set of reachable addresses to its peers. It will also allow nodes to determine IPv6 vs IPv4 connectivity.
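To illustrate what "testing individual addresses" means, here's a rough sketch (the helper names are hypothetical, not the actual go-libp2p autonatv2 API): each address gets its own dial-back verdict instead of one global "reachable" answer.

```go
// Hypothetical sketch of AutoNAT v2's per-address verification;
// the real go-libp2p autonatv2 API differs.
package main

import (
	"context"
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

// dialBackFunc stands in for an AutoNAT v2 server request:
// "please dial me back on exactly this address".
type dialBackFunc func(ctx context.Context, addr ma.Multiaddr) (bool, error)

// verifyAddrs returns the subset of addrs confirmed reachable.
func verifyAddrs(ctx context.Context, addrs []ma.Multiaddr, dialBack dialBackFunc) []ma.Multiaddr {
	var reachable []ma.Multiaddr
	for _, a := range addrs {
		ok, err := dialBack(ctx, a)
		if err != nil {
			continue // e.g. server rate-limited us; retry with another server later
		}
		if ok {
			reachable = append(reachable, a)
		}
	}
	return reachable
}

func main() {
	addr, _ := ma.NewMultiaddr("/ip4/1.2.3.4/tcp/4001")
	fake := func(ctx context.Context, a ma.Multiaddr) (bool, error) { return true, nil }
	fmt.Println(verifyAddrs(context.Background(), []ma.Multiaddr{addr}, fake))
}
```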

When initially deployed, a node won't be able to find any AutoNAT v2 servers, as there will be very few of them, so it cannot use AutoNAT v2 to verify its addresses. Initially, we will therefore need to enable the server without using the client.

There are two considerations for a subsequent deployment that uses the v2 client to verify its addresses.

  1. Finding a Peer

As adoption of the version with the AutoNAT v2 server increases, nodes will be able to find AutoNAT v2 servers. The adoption fraction at which nodes can find at least 3 servers is a small multiple of 3 / (DHT routing table size). For the IPFS DHT, the routing table size is roughly 250 nodes, which makes this number 3/250 = 1.2%. So once 5% of nodes have adopted the version with the AutoNAT v2 server, most nodes should be able to find enough AutoNAT v2 servers to verify their addresses.
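As a sanity check, the same arithmetic in Go (the routing table size and the safety multiple are the assumptions from the paragraph above):

```go
package main

import "fmt"

func main() {
	const (
		routingTableSize = 250.0 // approximate IPFS DHT routing table size
		serversNeeded    = 3.0   // AutoNAT v2 servers a client wants to find
		safetyMultiple   = 4.0   // slack for uneven server distribution
	)
	threshold := serversNeeded / routingTableSize
	fmt.Printf("bare minimum adoption:  %.1f%%\n", threshold*100)                // 1.2%
	fmt.Printf("comfortable adoption: ~%.0f%%\n", threshold*safetyMultiple*100) // ~5%
}
```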

  2. Server Rate Limiting

If the ratio of clients to servers is too high, AutoNAT v2 will not be useful, as clients won't be able to verify their addresses because of server rate limiting.

In the steady state, when a large chunk (10%?) of the network runs AutoNAT v2 servers, this is not a problem. The ratio of DHT nodes in client mode to server mode is roughly 10:1, so we should have a ratio of 10 AutoNAT v2 clients to 1 server. If all the clients have 10 addresses each and verify these 10 addresses every 30 minutes (a very naive implementation), the server needs to verify 100 addresses in 30 minutes, which is about 3.3 addresses a minute.
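The same back-of-the-envelope load estimate, with all numbers taken from the assumptions above:

```go
package main

import "fmt"

func main() {
	const (
		clientsPerServer = 10.0 // DHT client-mode : server-mode ratio
		addrsPerClient   = 10.0 // addresses each client advertises
		intervalMinutes  = 30.0 // naive re-verification interval
	)
	dials := clientsPerServer * addrsPerClient
	fmt.Printf("%.0f dial-backs every %.0f min = %.1f per minute\n",
		dials, intervalMinutes, dials/intervalMinutes) // 100 per 30 min ≈ 3.3/min
}
```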

But until we reach this state, there is a possibility that clients update more quickly than servers. This seems likely to me, as about 50% of the IPFS network is still on v0.18. I'm not sure how to factor this in.

Is there any prior art for such rollouts? I'm especially interested in understanding how to bound the client-to-server ratio.

Note: I'm ignoring the alternative of gracefully handling the case where AutoNAT v2 peers are not available, as the development time for that would be very high. It seems better to just switch over to AutoNAT v2 after we have enough servers in the network.

cc @marten-seemann

Jorropo commented 1 year ago

This sounds incrementally testable. We turn on the server, then use network statistics to decide when the ratio and total deployment both seem OK.

lidel commented 1 month ago

For anyone following this rollout, https://github.com/ipfs/kubo/pull/10468 is scheduled to ship with 0.30.0-rc1 (https://github.com/ipfs/kubo/issues/10436). By default, a publicly dialable node will expose both v1 and v2:

```
/libp2p/autonat/1.0.0
/libp2p/autonat/2/dial-back
/libp2p/autonat/2/dial-request
```
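For anyone who wants to measure adoption from a running node, a sketch along these lines (assuming a go-libp2p host and the protocol.ID-based peerstore API) checks whether identify has recorded the v2 server protocol for a peer:

```go
// Sketch: check whether a connected peer advertises the AutoNAT v2
// dial-request protocol, using what identify stored in the peerstore.
package adoption

import (
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
)

const autonatV2DialRequest = protocol.ID("/libp2p/autonat/2/dial-request")

// SupportsAutoNATv2 reports whether peer p has advertised the
// AutoNAT v2 server protocol to host h.
func SupportsAutoNATv2(h host.Host, p peer.ID) (bool, error) {
	protos, err := h.Peerstore().SupportsProtocols(p, autonatV2DialRequest)
	if err != nil {
		return false, err
	}
	return len(protos) > 0, nil
}
```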

My understanding of the plan: ship the server enabled by default first, then enable the client in a future release once enough of the network runs AutoNAT v2 servers.

RubenKelevra commented 1 month ago

I haven't looked at the code yet, just a quick question: is there a blacklist yet for addresses that won't be tested?

I was thinking about RFC1918 as well as RFC4193 addresses. Otherwise, offering an AutoNAT server may cause a lot of connection attempts into a private network the server also has access to.

The related bug tracking this would be https://github.com/libp2p/go-libp2p/issues/436.

I just see an issue with deploying a new version of AutoNAT without fixing a potential DoS vector from a server into a private network architecture that is not accessible from the internet.

Sure, the potential for harm is slim, but we have had issues in the past with plastic routers at home hanging up because of this.

sukunrt commented 1 month ago

The client (provided in a future release) that uses AutoNAT v2 for verifying reachability will only test public IP / DNS addresses. The client will not query for private IPs, and the server will reply with an error for private IPs.
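That filtering is essentially what go-multiaddr's manet helpers provide; a minimal sketch (not the actual autonatv2 code) of narrowing a candidate list to publicly routable addresses:

```go
// Sketch: keep only publicly routable addresses, the kind of
// filtering described above. Not the actual autonatv2 code.
package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
	manet "github.com/multiformats/go-multiaddr/net"
)

func publicOnly(addrs []ma.Multiaddr) []ma.Multiaddr {
	var out []ma.Multiaddr
	for _, a := range addrs {
		// IsPublicAddr is false for RFC1918, RFC4193, loopback,
		// and link-local addresses, so none of those get probed.
		if manet.IsPublicAddr(a) {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	private, _ := ma.NewMultiaddr("/ip4/10.0.0.5/tcp/4001")
	public, _ := ma.NewMultiaddr("/ip4/203.0.113.7/tcp/4001")
	fmt.Println(publicOnly([]ma.Multiaddr{private, public})) // only the public one remains
}
```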

https://github.com/libp2p/go-libp2p/issues/436 is about the node sharing private, unroutable IPs with the rest of the network. That's a separate issue, unrelated to the service AutoNAT provides. If you're interested in fixing that, a comment explaining your use case would be very helpful!

> I just see an issue with deploying a new version of AutoNAT without fixing a potential DoS vector from a server into a private network architecture that is not accessible from the internet.

Can you elaborate on the attack vector here?

> we have had issues in the past with plastic routers at home hanging up because of this.

Can you share more info about this?

RubenKelevra commented 1 month ago

> The client (provided in a future release) that uses AutoNAT v2 for verifying reachability will only test public IP / DNS addresses. The client will not query for private IPs, and the server will reply with an error for private IPs. [...] Can you elaborate on the attack vector here?

There is none - thanks for your explanation!

> we have had issues in the past with plastic routers at home hanging up because of this.
>
> Can you share more info about this?

Yeah sure.

So a client would announce all their IPs, including, say, a 10.x.x.x IP address, to the DHT.

Another client would now try to find this client and try to connect to those addresses.

This usually is not a problem, as there's simply no route to those networks in your NAT router, so it can respond with an ICMP message saying there's no route.

But this changes as soon as we talk about carrier-grade NAT.

In this case your router has a route but seems to get no ICMP back from the ISP infrastructure due to ICMP firewalling.

This leads to a lot of half-open NAT connections which never get a response, until the plastic router at home randomly starts to drop actually useful connections as its NAT (connection tracking) table overflows.

There have been a couple of issues around this, like https://github.com/ipfs/kubo/issues/3320 and https://github.com/ipfs/kubo/issues/9998. The only workaround is to manually add a connection filter for all RFC1918 subnets in the config, except your own, and to move your own subnet to something less commonly used (so avoid 192.168.0.0/24, 192.168.1.0/24, and 192.168.178.0/24). That last step is needed because the connection filter would also block local connections to other peers in your own network, since there's only one scope for addresses: https://github.com/libp2p/go-libp2p/issues/793
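For reference, that workaround lives in Kubo's config under Swarm.AddrFilters (multiaddr netmasks the node refuses to dial). A sketch of what such a filter list might look like; the subnets are illustrative, and since AddrFilters has no allow-list, you would replace the block covering your own LAN with narrower ranges that skip it:

```json
{
  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip6/fc00::/ipcidr/7"
    ]
  }
}
```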