Closed: cinderblock closed this issue 8 years ago
it really should not scan internal IPs (it currently does try to dial other peers at their internal IPs in hopes that they're on the same LAN). We have multicast DNS for finding peers on the local network, we should filter out local IPs from our advertised list.
@jbenet o/
I've seen these reports before as well.. Duplicate of #1173
it really should not scan internal IPs (it currently does try to dial other peers at their internal IPs in hopes that they're on the same LAN). We have multicast DNS for finding peers on the local network, we should filter out local IPs from our advertised list.
@whyrusleeping (a) Multicast DNS does not work all the time. It is often disabled in many networks -- it's happened at 2/4 talks I've given recently -- and even in some OSes. (And it certainly does not work for containers.) (b) Look at the WebRTC standard. Dialing local network addresses is precisely how it works. I'm tired of having to justify this over and over.
Now, there are many ways to fix this sort of thing. For example, just two among many:

- Don't dial a peer's 192.168.0.0/16 address when not within that subnet. This alone will cut out most -- if not all -- of the sysadmin netscan warnings. Most VPSes are in different networks.
- Look at the silencing/niceness heuristics other (aggressively local) p2p applications use.
Getting these down would go a long way for people trying to run go-ipfs in VPSes at providers that (rightly!) are concerned about random processes trying to dial lots of local addresses.
We received a similar letter from a dedicated server provider.
Long term, I really do see this as something ISPs need to become more comfortable with, as the web adjusts to a more decentralized model, and in the case of IPFS, even datacenters become the home to localized caches of content (and it's a good thing for them overall).
That said, in the interim they treat most of this sort of activity as malicious. So a way to turn it off is needed for now, until more widespread adoption takes place.
I think your next steps would solve this issue for us.
Also, it's possible that a firewall rule could be used as a workaround for now. I'm not sure what that rule would look like; I'm not very savvy with iptables.
so an iptables solution to this would be to just block outgoing connections to other 'internal' networks like so:
iptables -A OUTPUT -d 172.17.2.0/24 -j REJECT
iptables -A OUTPUT -d 192.168.0.0/16 -j REJECT
and so on, for any other networks that you are accused of scanning. I personally don't think this is a good approach, but it may work in the short term.
@aSmig gave me some great feedback on iptables usage, and recommended this as a workaround:
iptables -A OUTPUT -d 10.0.0.0/8 -p tcp --sport 4001 --dport 4001 -j REJECT
iptables -A OUTPUT -d 172.16.0.0/12 -p tcp --sport 4001 --dport 4001 -j REJECT
iptables -A OUTPUT -d 192.168.0.0/16 -p tcp --sport 4001 --dport 4001 -j REJECT
This will block all private scans. Not ideal obviously, but all of the netscans I've gotten complaints about were related to local IP scanning.
If you're running Ubuntu, this service will persist the settings:
sudo apt-get install iptables-persistent
You may need to disable UFW if it is running (and then iptables -F), or make a version of these rules that uses UFW instead of iptables.
I'll report back if I get another netscan warning.
@kyledrake thank you!
Also got a netscan report from my hoster, looking quite similar to the one in the original post for this issue. Solved it with some iptables rules quite similar to the ones @kyledrake posted above:
iptables -A OUTPUT -d 192.168.0.0/16 -o eth0 -p tcp -m tcp -j DROP
iptables -A OUTPUT -d 10.0.0.0/8 -o eth0 -p tcp -m tcp -j DROP
iptables -A OUTPUT -d 172.16.0.0/12 -o eth0 -p tcp -m tcp -j DROP
In this case I had the chance to block all traffic to private IPs using the external interface, as the machine does not have any private networking on that interface.
Just a quick update that I have not had any more complaints from our DCO since we installed these filters.
@kyledrake thanks, good to know! still need to put this into IPFS soon. hopefully into 0.3.6 or 0.3.7
Just got another one:
Sat Jun 13 13:27:27 2015 TCP MYIP 59245 => 172.17.0.112 4001
Sat Jun 13 13:27:27 2015 TCP MYIP 54851 => 172.17.0.113 4001
Sat Jun 13 13:27:27 2015 TCP MYIP 50660 => 172.17.1.20 4001
Sat Jun 13 13:27:27 2015 TCP MYIP 51793 => 172.17.1.21 4001
...
Which is really weird, since the iptables policy is in place:
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REJECT tcp -- anywhere 10.0.0.0/8 tcp spt:4001 dpt:4001 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 172.16.0.0/12 tcp spt:4001 dpt:4001 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 192.168.0.0/16 tcp spt:4001 dpt:4001 reject-with icmp-port-unreachable
So I'm not sure what's going on here. The above rule doesn't cover 172.17 for some reason? Ideas welcome.
I also found this list that claims to cover all the private nets (RFC 1918), copied from here:
iptables -A valid-src -s 10.0.0.0/8 -j DROP
iptables -A valid-src -s 172.16.0.0/12 -j DROP
iptables -A valid-src -s 192.168.0.0/16 -j DROP
iptables -A valid-src -s 224.0.0.0/4 -j DROP
iptables -A valid-src -s 240.0.0.0/5 -j DROP
iptables -A valid-src -s 127.0.0.0/8 -j DROP
iptables -A valid-src -s 0.0.0.0/8 -j DROP
iptables -A valid-src -d 255.255.255.255 -j DROP
iptables -A valid-src -s 169.254.0.0/16 -j DROP
iptables -A valid-src -s $EXTERNAL_IP -j DROP
iptables -A valid-dst -d 224.0.0.0/4 -j DROP
Use at your own risk. I haven't edited this to make it useful, and I have no idea what $EXTERNAL_IP does, and it may not be what you want.
172.17.x.x is definitely covered by 172.16.0.0/12. The report indicates that the source ports are in the 50000-60000 range, but your rules only match when both the source and destination port are 4001. Pull the --sport 4001 out of your commands to match any source port.
The valid-src chain described above has a few issues, including blocking all outbound traffic if you specify your external IP. Most of the rules block outbound traffic only when the source IP matches a private network, but you want to match against destination IPs. If you really want to block any and all traffic to private network ranges on a given external interface, this should get you closer:
EXTERNAL_IF=eth0 # or whatever interface connects to your ISP
iptables -N valid-out # create the chain before appending rules to it
iptables -A valid-out -d 10.0.0.0/8 -j REJECT
iptables -A valid-out -d 172.16.0.0/12 -j REJECT
iptables -A valid-out -d 192.168.0.0/16 -j REJECT
iptables -A valid-out -d 224.0.0.0/4 -j REJECT
iptables -A valid-out -d 240.0.0.0/5 -j REJECT
iptables -A valid-out -d 127.0.0.0/8 -j REJECT
iptables -A valid-out -d 0.0.0.0/8 -j REJECT
iptables -A valid-out -d 255.0.0.0/8 -j REJECT
iptables -A valid-out -d 169.254.0.0/16 -j REJECT
iptables -A OUTPUT -o $EXTERNAL_IF -j valid-out # make sure this happens before a global ACCEPT
# Use this instead of the previous line if you only want to block traffic to port 4001
#iptables -A OUTPUT -o $EXTERNAL_IF -p tcp --dport 4001 -j valid-out
we should up the priority on this and get it out sooner.
We now have ip/cidr connection filtering: https://github.com/ipfs/go-ipfs/issues/1226 https://github.com/ipfs/go-ipfs/pull/1378
could someone:
Is this the format you ended up going with?: https://github.com/ipfs/go-ipfs/pull/1378#issuecomment-112609123
@kyledrake the format is /ip4/192.168.0.0/ipcidr/16
which is equivalent to just 192.168.0.0/16
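For anyone curious how that mapping works, here's an illustrative parser (a sketch, not the actual code from whyrusleeping/multiaddr-filter) that turns the multiaddr form back into a standard *net.IPNet; note that net.ParseCIDR itself happily accepts both IPv4 and IPv6 addresses:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseFilter converts a multiaddr-style filter such as
// "/ip4/192.168.0.0/ipcidr/16" into a plain *net.IPNet.
func parseFilter(s string) (*net.IPNet, error) {
	parts := strings.Split(strings.Trim(s, "/"), "/")
	// expected shape: [ip4|ip6, address, "ipcidr", bits]
	if len(parts) != 4 || parts[2] != "ipcidr" {
		return nil, fmt.Errorf("unrecognized filter: %q", s)
	}
	_, ipnet, err := net.ParseCIDR(parts[1] + "/" + parts[3])
	return ipnet, err
}

func main() {
	n, err := parseFilter("/ip4/192.168.0.0/ipcidr/16")
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 192.168.0.0/16
}
```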
i added another $10 to this issue: https://www.bountysource.com/issues/14335371-daemon-triggers-a-netscan-alert-from-hosting-company
So this should be fixed, and @whyrusleeping fixed it
(though would love people to play with it, make sure it does fix things, and make an example)
what's the PR#?
So, if i close this issue, i acquire currency?
@whyrusleeping what's an example config here? does this look right:
{ // in config
"DialBlockList": [
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/172.10.1.0/ipcidr/28"
]
}
Based on:
Would this also work with IPv6? /ip6/fc00::/ipcidr/8
@lgierth https://github.com/whyrusleeping/multiaddr-filter/blob/master/mask.go#L11 apparently not :/ -- cc @whyrusleeping
mmm, yeah... that's an easy fix.
This should be all the needed filters for IPv4 private networks:
{
"DialBlockList": [
"/ip4/10.0.0.0/ipcidr/8",
"/ip4/172.16.0.0/ipcidr/12",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/100.64.0.0/ipcidr/10"
]
}
@lgierth we should have ipv6 support shortly: https://github.com/whyrusleeping/multiaddr-filter/pull/2
cc @kyledrake @Luzifer
Ok round 3! https://github.com/ipfs/go-ipfs/pull/1433 just merged, which fixes the filters loading from the config. but the filters moved location slightly, they're now at:
{
"Swarm": {
"AddrFilters": [ ]
}
}
So set them with this line:
ipfs config --json Swarm.AddrFilters '[
"/ip4/10.0.0.0/ipcidr/8",
"/ip4/172.16.0.0/ipcidr/12",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/100.64.0.0/ipcidr/10"
]'
you should get
> ipfs config Swarm.AddrFilters
[
"/ip4/10.0.0.0/ipcidr/8",
"/ip4/172.16.0.0/ipcidr/12",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/100.64.0.0/ipcidr/10"
]
FYI, the authoritative list of non-Internet-routable IPv4 address ranges can be found on IANA's site. Anything with False in the Global column is not globally routable. There is a similar list for IPv6.
Haven't had a chance to test the new filters yet with the fix, but I wanted to share my latest flavor of the iptables block:
/sbin/iptables -A OUTPUT -d 10.0.0.0/8 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 172.16.0.0/12 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 192.168.0.0/16 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 100.64.0.0/10 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 192.0.2.0/24 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 198.51.100.0/24 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 203.0.113.0/24 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 198.18.0.0/15 -p tcp --dport 4001 -j REJECT
Note that I've taken out the source port.
could someone test the new filters? would love to know whether this is fixed or not
(and thanks @kyledrake for the new table)
I'm not able to connect to another node on my LAN with the filters set appropriately
@kyledrake, you can reset your rule match counters with iptables -Z. Then check them a week later to see if anything got past the built-in filters and was blocked by your firewall. To show only rules that have matched packets, you can do this:
iptables -nvL | awk '$1!=0{print}'
This helps with testing and ensures your ISP won't get grumpy.
Starting test on Hetzner server… We'll see whether there is a netscan alert…
Rule-Set:
"Swarm": {
"AddrFilters": [
"/ip4/10.0.0.0/ipcidr/8",
"/ip4/100.64.0.0/ipcidr/10",
"/ip4/169.254.0.0/ipcidr/16",
"/ip4/172.16.0.0/ipcidr/12",
"/ip4/192.0.0.0/ipcidr/24",
"/ip4/192.0.0.0/ipcidr/29",
"/ip4/192.0.0.8/ipcidr/32",
"/ip4/192.0.0.170/ipcidr/32",
"/ip4/192.0.0.171/ipcidr/32",
"/ip4/192.0.2.0/ipcidr/24",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/198.18.0.0/ipcidr/15",
"/ip4/198.51.100.0/ipcidr/24",
"/ip4/203.0.113.0/ipcidr/24",
"/ip4/240.0.0.0/ipcidr/4"
]
},
(Networks from iana list @aSmig posted above)
@Luzifer can you confirm that the swarm has them on? ipfs swarm filters
?
# docker exec ipfs ipfs swarm filters
/ip4/192.168.0.0/ipcidr/16
/ip4/198.18.0.0/ipcidr/15
/ip4/198.51.100.0/ipcidr/24
/ip4/203.0.113.0/ipcidr/24
/ip4/10.0.0.0/ipcidr/8
/ip4/172.16.0.0/ipcidr/12
/ip4/192.0.0.0/ipcidr/29
/ip4/192.0.0.170/ipcidr/32
/ip4/169.254.0.0/ipcidr/16
/ip4/192.0.0.0/ipcidr/24
/ip4/240.0.0.0/ipcidr/4
/ip4/100.64.0.0/ipcidr/10
/ip4/192.0.0.8/ipcidr/32
/ip4/192.0.0.171/ipcidr/32
/ip4/192.0.2.0/ipcidr/24
it would be really cool if the parsing for ipfs swarm filters ignored entries starting with a # -- that way you could comment your blocked addr list and still pipe it to the command
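Until the parser supports that natively, a tiny pre-filter (illustrative only, not part of ipfs) could strip comments and blank lines before handing the list along:

```go
package main

import (
	"fmt"
	"strings"
)

// stripComments drops blank lines and lines beginning with '#',
// so a commented filter file can still be fed to the CLI.
func stripComments(input string) []string {
	var out []string
	for _, line := range strings.Split(input, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		out = append(out, line)
	}
	return out
}

func main() {
	list := "# RFC 1918\n/ip4/10.0.0.0/ipcidr/8\n\n/ip4/192.168.0.0/ipcidr/16\n"
	fmt.Println(stripComments(list))
	// [/ip4/10.0.0.0/ipcidr/8 /ip4/192.168.0.0/ipcidr/16]
}
```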
Until now neither feedback nor an alert from my hoster.
@Luzifer woot! lets keep it up
Still running, no complaints… I think the filters are working… Praise @whyrusleeping for building it!
i'm glad we've fixed that finally!
Had the same problem.
I'm not sure if this is a problem that pops up over and over. If it is, you could perhaps add a little note to the installation guide or disable local dialing in the default configuration. AFAIK there are approx. 250 nodes, so I don't think it is that important at this stage.
Anyways, interesting and awesome project. Keep up the good work!
the path to improvement: we could also add a warning to ipfs daemon.

I've also been wanting an ipfs init --interactive that asks users questions like:
enter peer ID keysize (2048):
bootstrap to public network (yes):
dial local network addresses (yes):
enable mdns service discovery (yes):
Solution: use ipfs init --profile=server
~ Kubuxu
I just installed go-ipfs, did an init, and started the daemon. A couple minutes later, my hosting provider sent me an abuse email indicating that a "Netscan" was coming from my host and asked me to stop. Here is the log they sent me (edited for privacy).
Notice that all but 3 destination addresses are internal network destinations. There are also many repeats (same destination internal IP), and this all happened in 33 seconds. Nearly all of it was on port 4001 as well, reinforcing that this was IPFS doing it.

How does ipfs currently find peers to swarm with? Is there a way to throttle back the peer discovery process? Why is it even trying to scan internal IPs? (I'm on an externally facing machine.)