v3n0m-Scanner / V3n0M-Scanner

Popular Pentesting scanner in Python3.6 for SQLi/XSS/LFI/RFI and other Vulns
GNU General Public License v3.0
1.46k stars, 408 forks

Search Engine Issues #11

Closed NovaCygni closed 8 years ago

NovaCygni commented 8 years ago

Only the last-called SE is doing the checks, and it doesn't seem to produce the expected result. Look into better nesting of the SEs so that each dork is run against each SE, and only then increment to the next d0rk.
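
For reference, a minimal sketch of the nesting being asked for; `dorks`, `engines` and `run_dork` are illustrative stand-ins, not the scanner's actual names:

```python
# Sketch only: every dork is tried against every search engine
# before moving on to the next one.

def run_dork(engine, dork):
    # placeholder: in the scanner this would query the SE and parse hits
    print("[{}] {}".format(engine, dork))

dorks = ["inurl:index.php?id=", "inurl:view.php?page="]
engines = ["bing", "us.search.yahoo", "startpage"]

for dork in dorks:            # outer loop: one d0rk at a time
    for engine in engines:    # inner loop: run it against every SE
        run_dork(engine, dork)
    # only after all SEs have been queried do we advance to the next d0rk
```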

NovaCygni commented 8 years ago

SEs will be updated for 4.0.2. I've got an awesome method in mind to run each SE's checks as individual asynchronous processes, though I may still use Twisted or another method, whichever works the way intended and provides the speed gain I'm after.
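
For reference, a minimal sketch of running each SE's checks as concurrent asyncio tasks on Python 3.6; `query_engine` and the engine list are hypothetical stand-ins, not the scanner's actual code:

```python
import asyncio

async def query_engine(engine, dork):
    # placeholder for the real request/parse logic of one search engine
    await asyncio.sleep(0.1)  # stands in for the network round-trip
    return "{}: results for {}".format(engine, dork)

async def check_all_engines(dork, engines):
    # each SE becomes its own task; all of them run concurrently
    tasks = [query_engine(engine, dork) for engine in engines]
    return await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
results = loop.run_until_complete(
    check_all_engines("inurl:index.php?id=", ["bing", "yahoo", "startpage"]))
print(results)
```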

d4op commented 8 years ago

I would use Scrapy in async mode.

NovaCygni commented 8 years ago

Yeah, I've been looking at that as an option, along with Tornado and asyncio. Alas, on my own there's a ton of work still to be done, and this aspect is currently taking a backseat: I want to improve the FTP crawler first so that the Metasploit vulns are matched to detected headers. When that's done, my next task is to trim the 18.5k d0rk list down to 7.8k SQLi dorks and 2.2k CCTV dorks (with a potential avenue for CCTV attacks and so forth), along with a "Shell Sweeper" to look for known shells. If you're willing to help out on any aspect of the program, feel free to join me :)

d4op commented 8 years ago

Yeah, I will try to improve some stuff and make it smoother.

d4op commented 8 years ago

Some interesting links:
https://github.com/danmcinerney/xsscrapy
https://github.com/titantse/seCrawler

NovaCygni commented 8 years ago

Btw, the long elif chain was intentional; I had planned to add checks for known exploits against a given target, but in the end I got distracted by other projects and never got round to it. The search engines were cut down to one for now because I was playing around with Yahoo's flaws in blocking suspicious traffic: it turns out that when us.search.yahoo blocks you, you can simply change the "us" part to gb, fr, de or whatever and it'll work. Cheers for the seCrawler link, gonna pull it into PyCharm now and give it a peek.

d4op commented 8 years ago

Before I forget: http://blog.florian-hopf.de/2014/07/scrapy-and-elasticsearch.html has some Scrapy material on pipelining the results into a DB / Elasticsearch. Kibana could then be used for a smooth web UI, like this: https://raw.githubusercontent.com/TiNico22/ELKconfig/master/CISCO-ASA/Dashboard-ASA.png
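
Along those lines, a rough sketch of what a Scrapy item pipeline feeding Elasticsearch could look like; the index name, item shape, and the 7.x-era `elasticsearch` client call are all assumptions, not part of the project:

```python
# pipelines.py (sketch); enable with
# ITEM_PIPELINES = {"v3n0m.pipelines.EsPipeline": 300} in settings.py
from elasticsearch import Elasticsearch

class EsPipeline:
    """Push every scraped item into an Elasticsearch index for Kibana dashboards."""

    def open_spider(self, spider):
        # connect once per spider run; defaults to localhost:9200
        self.es = Elasticsearch()

    def process_item(self, item, spider):
        # index the raw item; Kibana/Marvel can then visualise the index directly
        self.es.index(index="v3n0m-results", body=dict(item))
        return item
```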

NovaCygni commented 8 years ago

d4op, Kibana + Marvel looks awesome... easy enough to implement and Pythonise as well.

d4op commented 8 years ago

NovaCygni, yeah, looks really nice, may give it a shot. I've already thought a bit about your TLD.yahoo.com idea: you are generating a sites array, so you could push more target TLDs into that array and do something like `for site in sitesarray => http:// + site + .yahoo.com` :p so the results will be better and it rotates.
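
A minimal sketch of that rotation idea; the regional subdomain list and URL pattern are assumptions, not the scanner's actual code:

```python
from itertools import cycle
from urllib.parse import quote_plus

# cycle through regional Yahoo frontends so a block on one
# (e.g. us.search.yahoo.com) doesn't stop the whole run
regions = cycle(["us", "gb", "fr", "de"])

def yahoo_url(dork):
    # each call hands out the next regional frontend in the rotation
    return "https://{}.search.yahoo.com/search?p={}".format(
        next(regions), quote_plus(dork))

print(yahoo_url("inurl:index.php?id="))   # us.search.yahoo.com/...
print(yahoo_url("inurl:view.php?page="))  # gb.search.yahoo.com/...
```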

NovaCygni commented 8 years ago

After looking at Yahoo and comparing results, I'm not sure if it's filtering the results itself (removing some of them and replacing them with the domain's sponsored version of the link): Bing gives 20% more "hits" than Yahoo with 20% fewer d0rks used! Depending on how I can get ES to use other engines, it won't matter anyway. Sadly, my Tkinter learning began only a month ago, so the GUI will be fugly... really fugly! Can't thank you enough though; I think I'm going to love ES and Marvel :)

NovaCygni commented 8 years ago

Issue's been nailed. The first stage of implementing asyncio is complete: it's already pulling bulk pages of requests in one go, with vastly improved speed. The same method will be used to run the checks against the other search engines as additional 'future' processes.
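
For context, a bulk fetch along those lines might look roughly like this with asyncio plus aiohttp on Python 3.6; the URL list and function names are illustrative only, not the project's implementation:

```python
import asyncio
import aiohttp

async def fetch(session, url):
    # one in-flight request; errors are returned rather than raised
    # so a single bad page doesn't kill the whole batch
    try:
        async with session.get(url) as resp:
            return url, await resp.text()
    except aiohttp.ClientError as exc:
        return url, "error: {}".format(exc)

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        # every page is requested concurrently and gathered in one go
        return await asyncio.gather(*(fetch(session, url) for url in urls))

urls = ["https://example.com/page{}".format(i) for i in range(10)]
loop = asyncio.get_event_loop()
pages = loop.run_until_complete(fetch_all(urls))
print(len(pages), "pages fetched")
```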