nbs-system / naxsi

NAXSI is an open-source, high performance, low rules maintenance WAF for NGINX
GNU General Public License v3.0

Help with NAXSI rules for DoS prevention #389

Closed: C0nw0nk closed this issue 7 years ago

C0nw0nk commented 7 years ago

So I have been receiving a lot of annoying requests to my servers lately that NAXSI could definitely help with.

The URLs contain strings like the following to bypass caches and flood / DoS back-end processes.

(switching between upper and lower case)

index.php
InDeX.php
INDEX.PHP

(inserting random junk data to bypass caches)

index.php?random=1&junk=fake
InDEX.PHP?RanDom=9000&junk=morefake

User Agent used

1'"2000

Other user agents they use increment the number by 1 each time, etc.

What are some good rules to prevent this?

shel3over commented 7 years ago

Short solution: first try to limit the request rate by IP; you can do that using ngx_http_limit_req_module.

C0nw0nk commented 7 years ago

That was already done and is in place, but there are many IPs, it may as well be thousands of IPs probing Shellshock-style, yet they are not probing for SQL exploits or anything to hack the server with. They are just intentionally bypassing the caches to DoS the server (Slowloris-like).

http {

    # one request per second per client IP, tracked in a 10 MB shared zone
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {

        location / {
            limit_req zone=one burst=5;   # allow short bursts of up to 5 queued requests
            limit_conn addr 1;            # at most one concurrent connection per IP
        }

    }
}

That is why some naxsi rules to prevent these fake and junk-data requests would be useful.

As you can see from the User-Agent I provided, it is obviously spoofed. Naxsi could use a rule to detect spoofed/faked user agents that are too short, all numbers, etc.

C0nw0nk commented 7 years ago

It also appears that browser extensions like this do not help the situation either.

https://addons.mozilla.org/en-GB/firefox/addon/random-agent-spoofer/?src=cb-dl-users

They insert fake data into URLs, spoof headers, etc.

I am not seeking to block spoofed headers that could still be a real, valid header or URL, but when I receive requests like this, which are blatantly, obviously garbage data and spoofed, these should be stopped.

The following is blatantly a problematic request with the intention of bypassing caches:

User-Agent : 12345
URL : /inDeX.php?Random=FAKEData&vars=123&vars2=456&etc=more-fake_garbage
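
As a side note, the cache-bypass part of this can also be blunted at the cache itself, independently of naxsi. A minimal sketch, assuming the site already sits behind proxy_cache and that the query string is not needed for the cache key (the map pattern, zone name and upstream are illustrative, not taken from my real config):

# in the http block: fold case variants of the path onto one cache key
map $uri $cache_uri {
    ~*^/index\.php$   /index.php;
    default           $uri;
}

server {
    location / {
        proxy_cache       my_cache;                   # assumes a proxy_cache_path zone named my_cache
        proxy_cache_key   "$scheme$host$cache_uri";   # no $args, so ?random=junk cannot create new entries
        proxy_pass        http://backend;             # placeholder upstream
    }
}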
jvoisin commented 7 years ago

You can write a rule that matches on the user agent, I guess, something like MainRule "mz:$HEADERS_VAR:user-agent" "rx:[0-9\"]" "s:$DOS:8" "id:1337";
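
For context, a MainRule only raises a score; blocking happens when a CheckRule in the protected location acts on that score, and naxsi has to be enabled there and not left in LearningMode. A minimal sketch, where the $DOS score name, the threshold and the id are illustrative choices rather than anything naxsi mandates:

# http block, alongside the included naxsi_core.rules
MainRule "mz:$HEADERS_VAR:user-agent" "rx:[0-9\"]" "s:$DOS:8" "id:1337";

# protected location
location / {
    SecRulesEnabled;
    DeniedUrl "/RequestDenied";
    CheckRule "$DOS >= 8" BLOCK;
    # plus the usual fastcgi/proxy configuration
}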

C0nw0nk commented 7 years ago

Thanks, but wouldn't a rule like that block any user agent that contains numbers? For example, a valid, legitimate user agent is as follows:

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

It needs to be a strict match for numbers only, or exact matching, I feel, something like the sketch just below.
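
A stricter variant, untested and purely illustrative, that only fires on user agents consisting of nothing but digits (so Googlebot/2.1 and friends are untouched); the id and score name are arbitrary examples:

MainRule "rx:^[0-9]+$" "msg:numeric-only user-agent" "mz:$HEADERS_VAR:user-agent" "s:$DOS:8" "id:20001";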

As for query strings, that's just a nightmare, really:

?rnd
?ran
?rand
?random

?1
?2
?3
?4

?abc
?efg

etc
etc
buixor commented 7 years ago

Hello, I think having a challenge (i.e. JS) will be more efficient than trying to solve it with naxsi, as you might run into an endless cat-and-mouse game :) something like https://github.com/kyprizel/testcookie-nginx-module might help!
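
For the record, a rough sketch of what wiring that module in could look like, based on the example in its README as far as I remember it (cookie name, secret, fallback path and upstream are placeholders, so double-check the directives against the module's documentation):

location / {
    testcookie on;                       # issue a JS/cookie challenge before serving content
    testcookie_name BPC;                 # cookie name (placeholder)
    testcookie_secret keepmesecret;      # change this
    testcookie_session $remote_addr;     # bind the cookie to the client IP
    testcookie_max_attempts 3;           # give up after a few failed attempts
    testcookie_fallback /cookies.html;   # shown to clients that never pass the challenge
    testcookie_get_only on;              # only challenge GET requests

    proxy_pass http://backend;           # placeholder upstream
}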

C0nw0nk commented 7 years ago

It is already a cat and mouse game :(

Do you know if JavaScript challenge pages like this (the same as Cloudflare's Anti-DDoS / IUAM "I'm Under Attack Mode" challenge page, coincidentally) can let search engine crawlers through? I don't know how well Google, Bing, Baidu, DuckDuckGo, etc. handle them; these crawlers and bots may not understand or solve JavaScript challenges.

jvoisin commented 7 years ago

You can of course whitelist the search engines, since their IP ranges are public (or whitelist by user agent, but it won't take long for the attacker to detect that, I guess).
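
As an illustration, one way to do that with stock nginx is to key the existing rate limit off a variable that is empty for known crawler ranges, since limit_req does not account requests whose key is empty. The range below is only an example; the real lists have to come from each search engine's own documentation:

geo $crawler {
    default          0;
    66.249.64.0/19   1;   # example range attributed to Googlebot -- verify before use
}

map $crawler $limit_key {
    0   $binary_remote_addr;   # normal clients: limited per IP
    1   "";                    # crawlers: empty key, so limit_req skips them
}

limit_req_zone $limit_key zone=one:10m rate=1r/s;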

C0nw0nk commented 7 years ago

Yeah, I would never whitelist by user agent since it is easy for them to spoof and fake that.

As for the IPs, I have no idea if there is an existing list anyone can share, or is aware of, that covers all legitimate search engine crawlers' IPs to whitelist.