ocervell opened this issue 8 months ago (status: Open)
Hi @ocervell. Some of the things you mention can already be done. Some examples:

- the `--maxRequestsPerMinute` argument; this will avoid the timeouts you mention
- the `--skipFuzzers` argument; you can provide a comma-separated list of fuzzers to be excluded
- the `--ignoreXXX` arguments; these allow you to ignore specific errors returned by the service, or specific response codes. Ignoring means they will be reported as success and included in the report, but you can skip reporting for them using `--sri`
- getting the same behaviour for all `Accept` headers should mean that it's fine; maybe the app behaves equally badly for all `Accept` headers, so I would rely on the `--ignoreXXX` arguments mentioned above
- the `HappyFuzzer`; if the requests need additional context (like some entities needing to be created first) you can supply static/reference data using the `--refData` argument; this is a great way to provide additional context and make some fields static so that the requests meet business constraints (see the sketch after this list)

It's a good idea to extend the `User-Agent` header with additional context. I'll add that to the backlog.
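For illustration, a minimal sketch of what a `--refData` file could look like, assuming the YAML layout of paths mapping to field/value pairs; the paths and field names below are hypothetical and would need to match your own contract:

```yaml
# refs.yml -- hypothetical reference data
/orders:
  customerId: "1a2b3c"   # reuse an existing entity instead of a fuzzed value
  currency: "EUR"        # keep business-constrained fields static
/orders/{orderId}:
  orderId: 42
```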
I would typically recommend a first round with all fuzzers in blackbox mode: `cats ... -b -k`, which will only report `500` responses. It just needs the contract and the authorization headers.
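A sketch of such a first round, using only flags already mentioned in this thread (the URL and token are placeholders):

```bash
# blackbox first pass: only 500 responses are reported
cats -c open_api.yml -s https://<API_URL> -b -k \
     -H "Authorization=Basic <TOKEN>" \
     --maxRequestsPerMinute 60   # throttle to avoid the timeouts discussed above
```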
After that, you can play with the `--ignoreXXX` arguments, the `--matchXXX` arguments and the different filtering arguments.
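A possible follow-up run, as a sketch; `--skipFuzzers`, `--sri` and `--refData` appear earlier in this thread, while the exact spelling of the ignore filter is an assumption:

```bash
# second pass: skip noisy fuzzers and ignore codes the app returns regardless of input
# (--ignoreResponseCodes is an assumed spelling of one --ignoreXXX flag; check `cats -h`)
cats -c open_api.yml -s https://<API_URL> -H "Authorization=Basic <TOKEN>" \
     --skipFuzzers ExtraHeaders \
     --ignoreResponseCodes 501,502 \
     --sri \
     --refData refs.yml
```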
The `User-Agent` header is enhanced in the latest release: https://github.com/Endava/cats/releases/tag/cats-11.3.0
From ocervell's original issue:

`cats` is good when cherry-picking a fuzzer and running it on one endpoint. However, today I'm scanning an API for a customer, running on all endpoints using:

```bash
cats -c open_api.yml -s https://<API_URL> --proxyHost 127.0.0.1 --proxyPort 8080 -H "Authorization=Basic <TOKEN>" --checkHeaders --refData refs.yml --iu
```
I'm still getting more than 3k errors, which makes it difficult to identify what to prioritize. Some of them are timeouts due to the app not handling that many requests; others don't mean much (for instance, the `ExtraHeaders` fuzzer will produce errors when the app doesn't even process those headers, but they don't mean anything - and there are hundreds of examples like this).

The way I work around this at the moment is to run one fuzzer at a time (sketched below), but this somewhat defeats the purpose of running `cats` (ideally we want to do a full run, then pick the vulns we're interested in, and then re-run with a different set of inputs).
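A sketch of that single-fuzzer workaround; `--fuzzers` as the flag for cherry-picking one fuzzer is an assumption to confirm with `cats -h`:

```bash
# re-run a single fuzzer against the same contract (flag name assumed)
cats -c open_api.yml -s https://<API_URL> -H "Authorization=Basic <TOKEN>" \
     --fuzzers ExtraHeaders
```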
Proposal for improvements:

- It would be helpful to match the different types of fuzzers with known vulnerabilities, attack types, or just an explanation of how each could be used in an exploit, so that we could sort them in the UI and prioritize some of them.
- It would also be helpful if `cats` could help us ignore some errors. For instance, if fuzzing the `Accept` header results in the wrong expected error code, but that code is the same no matter what the `Accept` header is, it could be considered that the app ignores the header altogether, meaning 'normal behavior'.
- Have a way to tag requests with the fuzzer that made them, for instance by tweaking the `User-Agent` header: it could be something like `cats/<version> (<FUZZER_NAME> <EXPECTED_CODE>)`. This would allow, for instance, linking `cats` with the Burp proxy and doing the analysis by simply looking at the info in the request to link it back to the actual test (see the sketch after this list).
- Have a mode where we can make the "good" request (without tampering) to check that the response code is "good" (i.e. expected in the conf). There are a lot of times when even the normal request will fail (wrong or bad data replacements, wrong authorization header, ...), so it would be nice to detect that and flag it somehow instead of making all fuzzed requests output failures.
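For illustration, a hypothetical request as it could appear in Burp under the proposed format; the endpoint, version and token are made up:

```
GET /orders HTTP/1.1
Host: api.example.com
Authorization: Basic <TOKEN>
User-Agent: cats/11.3.0 (ExtraHeadersFuzzer 200)
```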