[Open] niklasfemerstrand opened this issue 8 years ago
Regarding this issue:
"The final thing that troubles me is that despite logging in via HTTPS the crawler happily sends issued cookies over HTTP for other requests and does not flag this as a vulnerability even with the grep.analyze_cookies plugin activated."
Did you try to set the target to https? Change:
To:
"The bold part makes it sound like the authentication requests will be the first requests performed. I find that instead the scan starts with the target URL exactly as specified, and in my case, testing with WordPress, it doesn't perform the authentication until request 19."
I believe those initial requests are the ones sent by 404 detection to fingerprint those responses. Nothing to worry about or change there; it shouldn't affect the login process.
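For context, the idea behind that 404 detection can be sketched as follows. This is my own illustration of the technique, not w3af's internals; all names here are made up:

```python
import hashlib
import random
import string

# Sketch (not w3af code) of why a scan opens with extra requests before
# authentication: 404 detection fetches a handful of URLs that cannot exist
# and fingerprints the responses, so later "not found" pages can be
# recognized even when the server returns them with a 200 status.

def random_path(length=12):
    return '/' + ''.join(random.choice(string.ascii_lowercase)
                         for _ in range(length))

def fingerprint(body):
    return hashlib.sha1(body.encode()).hexdigest()

def build_404_db(fetch, samples=5):
    # fetch() would be a real HTTP GET in the crawler
    return {fingerprint(fetch(random_path())) for _ in range(samples)}

def looks_like_404(body, db):
    return fingerprint(body) in db
```

With five sampled bodies, a handful of such probes at scan start is exactly the kind of traffic that would precede the first authentication request.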
"...which the website erroneously calls followRegex..."
"After examining the source code of auth.detailed I cannot find any section of it that is intended to append discovered links to the crawl queue. It seems this was either never implemented, or was implemented somewhere else and is now broken."
Authentication plugins are NOT meant to yield any URLs to the core, they just keep the session active.
I believe that the problem you're having is that your target is http and your auth settings are https. Fixing that might help.
I have found authentication plugin behaviour to be broken.
According to the documentation: "Authentication plugins are a special type of plugin which is responsible to keep a session alive during the whole scan. These plugins are called **before starting the scan** (in order to get a fresh session) and once every 5 seconds while the scan is running (to verify if the current session is still alive and create a new one if needed)."
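The scheduling the documentation describes can be sketched like this. All class and method names below are illustrative stand-ins, not w3af's actual plugin API:

```python
import time

# Illustrative sketch of the documented behaviour: log in before the scan
# starts, then check session liveness every `interval` seconds and log in
# again whenever the session has expired.

class ToyAuthPlugin:
    """Toy stand-in for an auth plugin; names are made up for the sketch."""
    def __init__(self):
        self.logged_in = False
        self.logins = 0

    def login(self):
        # a real plugin would POST the configured credentials here
        self.logged_in = True
        self.logins += 1

    def has_active_session(self):
        # a real plugin would fetch a check URL and grep for a check string
        return self.logged_in

def run_scan(plugin, checks=3, interval=5):
    plugin.login()                       # fresh session before the scan starts
    for _ in range(checks):
        if not plugin.has_active_session():
            plugin.login()               # session died: create a new one
        # ... crawl/audit requests would happen here ...
        time.sleep(interval)
```

Nothing in this loop feeds discovered URLs back to the crawler; on this reading the auth plugin's only job is session upkeep.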
The bold part makes it sound like the authentication requests will be the first requests performed. I find that instead the scan starts with the target URL exactly as specified, and in my case, testing with WordPress, it doesn't perform the authentication until request 19.
After request 19 the plugin outputs "Login success for username/password", so all seems well as far as logging in goes.
Continuing through the documentation: "Configure the crawl.web_spider plugin to ignore the logout link. This is important since we want to keep the session alive for the duration of the scan." This makes it clear that web_spider should automatically crawl links discovered after authentication, given a follow_regex that matches those links. However, I find that web_spider doesn't follow a single link discovered on pages reached after authentication, despite the configured .* follow_regex (which the website erroneously calls followRegex). After examining the source code of auth.detailed I cannot find any section of it that is intended to append discovered links to the crawl queue. It seems this was either never implemented, or was implemented somewhere else and is now broken.
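For reference, this is how I would expect a follow/ignore regex pair to filter discovered links; the example URLs and the ignore pattern are my own assumptions (WordPress logs out via wp-login.php?action=logout):

```python
import re

# Illustrative filter: `.*` as follow_regex matches every link, so the
# ignore side has to carve out the logout link explicitly.
follow_regex = re.compile(r'.*')
ignore_regex = re.compile(r'.*action=logout.*')

def should_crawl(url):
    return bool(follow_regex.match(url)) and not ignore_regex.match(url)

links = [
    'https://example.com/wp-admin/index.php',
    'https://example.com/wp-login.php?action=logout&_wpnonce=abc123',
]
crawlable = [u for u in links if should_crawl(u)]
# only the wp-admin link survives the filter
```

With .* as follow_regex, every post-login link should pass this filter except the logout URL, which is why the spider's failure to follow any of them looks like a bug rather than misconfiguration.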
The final thing that troubles me is that, despite logging in via HTTPS, the crawler happily sends the issued cookies over HTTP for other requests and does not flag this as a vulnerability, even with the grep.analyze_cookies plugin activated.
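The check I would expect grep.analyze_cookies to perform is roughly the following. This is my own sketch, not the plugin's code, and the example header is made up:

```python
# Sketch of the missing check: a cookie issued over HTTPS without the
# Secure attribute will also be sent on plain-HTTP requests, which is
# exactly the behaviour observed in the crawl.

def missing_secure_flag(set_cookie_header):
    attrs = [part.strip().lower() for part in set_cookie_header.split(';')]
    return 'secure' not in attrs

# hypothetical header, in the shape WordPress session cookies take
header = 'wordpress_logged_in=abc123; Path=/; HttpOnly'
```

A cookie like the one above, lacking the Secure attribute, is what lets the crawler replay the session over HTTP without any warning being raised.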
Below is the script I use for testing against an installation running the latest WordPress version. I would be grateful if you could take a look and help me resolve these problems.