XaF / fail2ban-subnets

fail2ban-subnets aims to provide a way to ban subnets of IPs repeatedly banned by fail2ban for multiple offenses.

Optimize subnets #4

Open dragan-m3connect opened 6 years ago

dragan-m3connect commented 6 years ago

Hi, I saw that the subnets are not optimized, because the IP ranges are limited to /24 subnets.

Example:

```
Chain fail2ban-subnets (1 references)
 pkts bytes target  prot opt source             destination
    0     0 DROP    all  --  89.204.155.128/25  0.0.0.0/0
    0     0 DROP    all  --  195.154.182.0/24   0.0.0.0/0
    0     0 DROP    all  --  89.204.154.0/24    0.0.0.0/0
    0     0 DROP    all  --  195.154.183.0/24   0.0.0.0/0
   33 24007 DROP    all  --  89.204.153.0/24    0.0.0.0/0
 571K  122M RETURN  all  --  0.0.0.0/0          0.0.0.0/0
```

I expected to have /16 subnets for 89.204 and 195.154, but that is not the case. Any idea how to force /16 subnets?

XaF commented 6 years ago

It's currently not possible; the maximum you can get is /24 (IP blocks are computed per xxx.xxx.xxx.* group). Extending the range to bigger subnets could be interesting, but I'd need to be more cautious about what to ban and when to ban it, as you probably don't want to reach a really large subnet unless a substantial number of IPs have been flagged in that subnet.
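For context, the per-/24 grouping described above can be sketched roughly like this (a minimal illustration of the idea, not the actual fail2ban-subnets code):

```python
from collections import defaultdict

def group_by_24(ips):
    """Group IPv4 addresses by their /24 prefix (first three octets)."""
    blocks = defaultdict(list)
    for ip in ips:
        ipb = '.'.join(ip.split('.')[:3])  # e.g. '89.204.153'
        blocks[ipb].append(ip)
    return dict(blocks)

banned = ['89.204.153.10', '89.204.153.77', '89.204.154.3']
print(group_by_24(banned))
# → {'89.204.153': ['89.204.153.10', '89.204.153.77'], '89.204.154': ['89.204.154.3']}
```

Each key can then only ever describe a /24, which is why nothing wider than xxx.xxx.xxx.0/24 can come out of it.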

cepheid666 commented 5 years ago

Back in the fail2ban thread on this topic, @toreit wrote a script that gets the IP subnet from whois info: https://github.com/fail2ban/fail2ban/issues/927#issuecomment-307725712

I wonder, would it be possible to integrate their script into fail2ban-subnets? For example, run the calculation currently used for the smallest CIDR, but in parallel run that script; if it returns a result, use that instead. (The reason to do both is that sometimes these guys will tarpit their whois info, so if the lookup times out, one still wants the ban to work on the smaller CIDR.)
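The fallback logic could look something like this (a sketch only; `whois_lookup` is a hypothetical callable standing in for the whois-parsing script, not an existing API):

```python
def pick_subnet(ip, whois_lookup):
    """Return the whois-derived CIDR when available, else the /24 fallback.

    `whois_lookup` is a hypothetical callable that returns a CIDR string
    (e.g. '89.204.128.0/18'), or returns None / raises on timeout or failure.
    """
    fallback = '.'.join(ip.split('.')[:3]) + '.0/24'
    try:
        cidr = whois_lookup(ip)
    except Exception:  # whois tarpitted, timed out, or output unparsable
        return fallback
    return cidr or fallback

# Whois answered: ban the larger block it reported.
print(pick_subnet('89.204.153.10', lambda ip: '89.204.128.0/18'))  # → 89.204.128.0/18

# Whois timed out: still ban the /24.
def tarpitted(ip):
    raise TimeoutError('whois server stalled')
print(pick_subnet('89.204.153.10', tarpitted))  # → 89.204.153.0/24
```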

What do you think?

XaF commented 5 years ago

That can be a good idea! I will note that in the things I might get the chance to work on when I'll have a few days off :) Thanks for the link!

alexanderperlis commented 5 years ago

> It's currently not possible, the maximum you can get to is /24 (ip blocks are computed for xxx.xxx.xxx.*)

What about making the block size a configurable option, e.g. `BLOCKSIZE=24`? Instead of hashing IPs into blocks via their three-octet prefix "xxx.xxx.xxx", hash them by the portion of their bit pattern corresponding to `BLOCKSIZE`. The line `ipb = '.'.join(ip.split('.')[:3])` just has to become something like:

```python
ipb = ip_to_int(*[int(chk) for chk in ip.split('.')]) & (0xFFFFFFFF << (32 - BLOCKSIZE))
```

With this modification, an admin can choose to be more conservative by raising the prefix length to, say, `BLOCKSIZE=26`, ensuring at most 64 addresses get blocked at a time, or they may be comfortable with the risk of `BLOCKSIZE=23` and possibly blocking 512 addresses at a time!

> Adding more range to get to bigger subnets can be interesting, but I'd need to be more cautious [...]

Making it a configurable option (with a default of `BLOCKSIZE=24`) puts the responsibility on the admin rather than on the code author.
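The masking above can also be done with the stdlib `ipaddress` module; here is a self-contained sketch of the proposed `BLOCKSIZE` behavior (an illustration, not the actual patch):

```python
import ipaddress

def block_key(ip, blocksize=24):
    """Mask an IPv4 address down to its blocksize-bit network prefix."""
    addr = int(ipaddress.IPv4Address(ip))
    masked = addr & (0xFFFFFFFF << (32 - blocksize)) & 0xFFFFFFFF
    return str(ipaddress.IPv4Network((masked, blocksize)))

print(block_key('89.204.153.130', 24))  # → 89.204.153.0/24   (current behavior)
print(block_key('89.204.153.130', 26))  # → 89.204.153.128/26 (more conservative)
print(block_key('89.204.153.130', 16))  # → 89.204.0.0/16     (wider ban)
```

Since the key is already a valid CIDR string, it could feed straight into the ban action without further formatting.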

timjaknz commented 4 years ago

I now notice a lot of attempts from subnets bigger than /24. Changing the line `ipb = '.'.join(ip.split('.')[:3])` to `ipb = '.'.join(ip.split('.')[:2])` is a simple change that extends it to /16. Hardcoded, I know, but quick to implement and it works well for me.