Open GinjaChris opened 7 years ago
URL scanner is poor wording on my part; it should really be a web crawler. Do we want to simply crawl a site and record all URLs/content, or would it be better to crawl it for certain useful pages (robots.txt, phpmyadmin, etc.) and record all output? Bear in mind that even 404 Not Found responses can reveal useful information about the target (for example the running server type and version, and potentially the presence of a WAF).
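The "crawl for certain useful pages" option could look something like this rough sketch: build probe URLs for a fixed list of well-known paths per target. The path list and function name here are illustrative only, not a decided design.

```python
# Illustrative sketch only: probe candidates for one target site.
# The exact path list would be decided in the recon module itself.
from urllib.parse import urljoin

INTERESTING_PATHS = ["robots.txt", "phpmyadmin/", "admin/"]

def candidate_urls(base):
    """Build probe URLs for one site from a fixed list of well-known paths."""
    if not base.endswith("/"):
        base += "/"  # urljoin needs a trailing slash to treat base as a directory
    return [urljoin(base, path) for path in INTERESTING_PATHS]
```

Even when these paths return 404, recording the response headers (e.g. `Server`) would capture the fingerprinting information mentioned above.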
Should add a simple URL scanner that can read its input from a text file. Add a basic example input file. It should be added to the recon module and needs to support both HTTP and HTTPS.
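A minimal sketch of what that scanner could look like, assuming a plain-text input file with one host or URL per line (the function names and the `(status, Server header)` result format are assumptions, not part of the issue):

```python
# Hypothetical sketch of the proposed recon-module URL scanner.
# Input file format assumed: one host/URL per line, '#' lines are comments.
import urllib.error
import urllib.request

def load_targets(path):
    """Read targets from a text file, skipping blanks and comments."""
    with open(path) as fh:
        return [line.strip() for line in fh
                if line.strip() and not line.startswith("#")]

def expand_schemes(target):
    """Cover both HTTP and HTTPS for bare hosts; keep explicit URLs as-is."""
    if target.startswith(("http://", "https://")):
        return [target]
    return ["http://" + target, "https://" + target]

def scan(urls, timeout=5):
    """Fetch each URL and record (status code, Server header)."""
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[url] = (resp.status, resp.headers.get("Server", ""))
        except urllib.error.HTTPError as err:
            # Even a 404 still carries headers worth recording.
            results[url] = (err.code, err.headers.get("Server", ""))
        except OSError:
            # DNS failure, refused connection, timeout, TLS error, etc.
            results[url] = (None, "")
    return results
```

Usage would be roughly `scan([u for t in load_targets("targets.txt") for u in expand_schemes(t)])`. This uses only the standard library; a real implementation might prefer `requests` if the project already depends on it.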