Closed vncloudsco closed 10 months ago
If you want to block any file containing a string like foo, you can use RAW_BODY.
Example here: https://github.com/nbs-system/naxsi/blob/master/t/24rawbody.t#L236
@wargio Is there any documentation on this? It doesn't seem to be what I need. Would I have to list all the files in rules one by one? I see no optimization in that. I think you are misunderstanding what I mean. Like this sample?
MainRule id:4241 s:DROP "rx:/etc/httpd/xxxx|/dev/abc|/home/user/1" mz:RAW_BODY;
@wargio I don't need to block a string in a file. I want to edit the match-pattern. ref: https://github.com/nbs-system/naxsi/wiki/rules-bnf#match-pattern
I'm not sure what you want to achieve. Match patterns are just the definition of which method can be used to match a behavior that you want naxsi to block/log/allow. To add or remove any of them, you need to touch the C code.
That said, if you want to block paths on your website, then yes, you need to add 1000 rules, unless they are in the same place, in which case you can just use wildcards, etc.
Internally these rules are matched lazily, so the impact should not be noticeable.
The ModSecurity feature you mentioned works the same way; the difference is that there you place the paths in one file, while here you write a rule.
My suggestion is to write one rule per path with the str: pattern, which is the most performant way.
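For illustration (the paths and rule ids below are made up, not from the thread), one str: rule per blocked path would look like:

```nginx
# One MainRule per path; str: does a plain substring match on the URL zone.
MainRule "str:/home/abc" "msg:block direct access to /home/abc" "mz:URL" "s:DROP" id:42000101;
MainRule "str:/home/cde" "msg:block direct access to /home/cde" "mz:URL" "s:DROP" id:42000102;
```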
I don't think that really works, because the files have nothing in common, or they do but my URL also contains a field that leads to duplication. If I use a whitelist, the rules become meaningless. E.g. my URL: domain/home
I need to block links like these:
domain/home/?xyz=/home/abc
domain/home/?xyz=/home/cde
domain/home/?xyz=/home/ttt
domain/home/.git
domain/home/.svn
.........
If so, do I have to write a rule for every link? That is not really efficient or optimal.
Oh, you have a vulnerability on the xyz= parameter? Just specify a regex on the args, allowing only requests that contain certain chars, or denying those that contain /.
@wargio That was just an example; it can be anything. How would one rule cover all of these? E.g.:
domain/home/?xyz=/home/abc
domain/home/?ytyty=/home/cde
domain/home/?uiuiu=/home/ttt
You can set a rule only on values:
MainRule "str:/" "msg: block any path in arg value" "mz:$URL:/home|ARGS" "s:DROP" id:12345;
You can use the include directive to do this dynamically, by having a file containing only rules that gets read by NGINX.
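As a sketch of that setup (file names are illustrative, not from the thread), the generated rule file is pulled into the configuration with the standard nginx include directive:

```nginx
http {
    # core naxsi rules
    include /etc/nginx/naxsi_core.rules;
    # generated per-path block rules; regenerate this file whenever the path list changes
    include /etc/nginx/naxsi_generated.rules;
}
```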
As I said above, it's not really a good solution. If I have a list like this, I have to sit down and write 17 rules for them. That is quite annoying and takes too long:
/.adSensepostnottherenonobook
/<invalid>hello.html
/actSensepostnottherenonotive
/acunetix-wvs-test-for-some-inexistent-file
/antidisestablishmentarianism
/appscan_fingerprint/mac_address
/arachni-
/cybercop
/nessus_is_probing_you_
/nessustest
/netsparker-
/rfiinc.txt
/thereisnowaythat-you-canbethere
/w3af/remotefileinclude.html
appscan_fingerprint
w00tw00t.at.ISC.SANS.DFind
w00tw00t.at.blackhats.romanian.anti-sec
I think a feature like pmFromFile is a necessity.
Another example: I need to block these user agents. Getting them all into one rule is quite a hassle, or I would have to write hundreds of rules for them. Not optimal: https://gist.github.com/vncloudsco/65cdacfe91d43f1f4eabd85c30856d87
Or say I want to check the raw body for these strings, what should I do? Put them all in one rule? Or sit down and write hundreds of rules for them?
__halt_compiler
apache_child_terminate
base64_decode
bzdecompress
call_user_func
call_user_func_array
call_user_method
call_user_method_array
convert_uudecode
file_get_contents
file_put_contents
fsockopen
gzdecode
gzinflate
gzuncompress
include_once
invokeargs
pcntl_exec
pcntl_fork
pfsockopen
posix_getcwd
posix_getpwuid
posix_getuid
posix_uname
ReflectionFunction
require_once
shell_exec
str_rot13
sys_get_temp_dir
wp_remote_fopen
wp_remote_get
wp_remote_head
wp_remote_post
wp_remote_request
wp_safe_remote_get
wp_safe_remote_head
wp_safe_remote_post
wp_safe_remote_request
zlib_decode
Actually, yes to all of them: you need to write one line for each. But to be honest, some of them can easily be expressed by a single regex rule. Even if we implemented something like pmFromFile, in memory it would be like implementing N rules.
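For illustration of that point (the rule id and exact pattern are made up), several of the call_user_* entries in the list above collapse into one rx: rule:

```nginx
# Matches call_user_func, call_user_func_array, call_user_method,
# and call_user_method_array in the raw request body with one rule.
MainRule "rx:call_user_(func|method)(_array)?" "msg:PHP dynamic-call function in body" "mz:RAW_BODY" "s:DROP" id:42000200;
```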
Honestly, that's terrible, because the rules for a system need to be updated and maintained regularly, so upgrades will become more troublesome. I hope that in the future you will develop this feature to make working with rules genuinely simpler for admins.
That's why internally you need tests to verify that rules work as expected. A too-simple way of configuring things makes writing rules too limiting.
> that's why internally you need tests to verify that rules works as expected.
I don't think so, since testing has nothing to do with this: wherever you get the list from, the result is the same.
> A too simple way of configuring things, makes writing rules too limiting.
I don't understand, what limitation do you mean?
The limitation is in how you define the rule. Let's say we could implement what you ask for; it would have to be compatible with the current way of writing rules:
MainRule "pmFromFile:/path/to/my/list/of/paths.txt" "msg: block paths" "mz:URL" "s:DROP" id:12345;
This would potentially block any URL defined in /path/to/my/list/of/paths.txt,
but what if you want a list that blocks something more complex? You would duplicate that line and have two rules pointing to the same file, but you would also duplicate the way of detecting them. I see many issues with such an approach. And what if I want to block a specific thing in my request that can be found in multiple paths? Should I then also implement
MainRule "str:this is an exploit" "msg: block exploit in paths" "mz:URL_pmFromFile:/path/to/my/list/of/paths.txt" "s:DROP" id:12345;
As you can see this becomes an issue.
My suggestion is to write a simple script that generates a set of rules, which you then read via the include functionality of NGINX.
This will save you time, and from the code perspective, both options above would have been converted into N rules in memory at run time anyway.
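A minimal sketch of such a generator script, assuming a plain-text input file with one path per line (the file name, starting rule id, and msg text are arbitrary choices, not part of naxsi itself):

```python
import sys

def generate_rules(paths, start_id=42000100):
    """Emit one naxsi MainRule per path, using the str: match pattern."""
    rules = []
    for offset, path in enumerate(p.strip() for p in paths):
        if not path or path.startswith("#"):
            continue  # skip blank lines and comments
        rules.append(
            f'MainRule "str:{path}" "msg:blocked path {path}" '
            f'"mz:URL" "s:DROP" id:{start_id + offset};'
        )
    return rules

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python gen_rules.py paths.txt > naxsi_generated.rules
    with open(sys.argv[1]) as f:
        print("\n".join(generate_rules(f)))
```

Regenerating the output file and reloading nginx is then enough to update the blocklist, without hand-editing individual rules.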
@wargio As for optimizing the msg zone, I find it unnecessary because the request is already in the log.
E.g. full log:
2021/03/31 08:43:06 [error] 30265#30265: *340465 NAXSI_FMT: ip=xxx.xxx.xxx.xxx&server=xxx.xxx.xxx.xxx&uri=/.git/config&learning=0&vers=0.56&total_processed=10331&total_blocked=161&block=1&cscore0=$ATTACK&score0=8&zone0=URL&id0=42000329&var_name0=, client: xxx.xxx.xxx.xxx, server: gianghochimong.playfun.vn, request: "GET /.git/config HTTP/1.1", host: "xxx.xxx.xxx.xx", referrer: "https://xxx.xxx.xxx.xxx/.git/config"
please open a new issue on this.
I don't think that is necessary. I think the development team should reconsider and develop a similar feature; it would be very useful for system operators. naxsi has too few ways to set up the match-pattern zone, which is annoying when a user needs to write grouped rules.
@blotus Can you help me with this problem?
I want to ask what kinds of naxsi search strings are available besides str and rx. I have about 1000 files that need to be blocked from direct access. How can I write rules for all those files? With ModSecurity I can use the pmFromFile operator: I just have to list those files in one file. Does naxsi have any way to do the same?
E.g. ModSecurity rules:
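A minimal sketch of the ModSecurity approach being referred to (the file name and rule id are illustrative), where @pmFromFile loads one pattern per line from an external file:

```apache
SecRule REQUEST_FILENAME "@pmFromFile blocked-paths.txt" \
    "id:100001,phase:1,t:none,deny,msg:'Blocked path from list'"
```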
Reference: https://docs.huihoo.com/modsecurity/2.5.7/operators.html