v3n0m-Scanner / V3n0M-Scanner

Popular Pentesting scanner in Python3.6 for SQLi/XSS/LFI/RFI and other Vulns
GNU General Public License v3.0

how to scan multiple domains #57

Closed h1tman closed 7 years ago

h1tman commented 7 years ago

I'm having an issue: how do I scan multiple domains instead of just .com? How would I use .com, .net, .org to filter? How does one scan for 3 different domains at once, instead of one domain? And is there a feature so we don't have to set a domain at all, so it will grab all TLDs?

d4op commented 7 years ago

Thought you could separate each TLD with a comma,

like net,com,org


NovaCygni commented 7 years ago

.com .org .biz, simply a space between domains, should work; .org%20.com%20.biz would also work.

NovaCygni commented 7 years ago

For clarification, the "Domain Selected" is basically just "inurl:", so .com becomes "inurl:.com". For all TLDs, *.* should work for your purposes.

d4op commented 7 years ago

You split the input string into array elements and start separate queries, right? Not doing the query like site:org%20net%20org, right?

The second isn't working on Bing.
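The split-into-separate-queries approach d4op asks about could look like this sketch (function names and the "inurl:" query shape are illustrative assumptions, not the scanner's actual code):

```python
# Hypothetical sketch: split a comma-separated TLD string into separate
# targets, one search query per TLD, instead of one combined query.
def split_tlds(user_input):
    """Turn 'com, net,org' into ['.com', '.net', '.org']."""
    tlds = []
    for part in user_input.split(","):
        part = part.strip()
        if not part:
            continue
        if not part.startswith("."):
            part = "." + part
        tlds.append(part)
    return tlds

def build_queries(dork, tlds):
    """One 'inurl:' query per TLD, avoiding site:org%20net%20org."""
    return ["{} inurl:{}".format(dork, tld) for tld in tlds]
```

Each returned query would then be sent to the search engine on its own, so no single query has to combine TLDs.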


d4op commented 7 years ago

Why aren't you working with site:com?


NovaCygni commented 7 years ago

*.com *.co.uk appears to work fine for me :/ Edit: correction, it throws looped Bing responses. I've a week off from Monday, so I guess I'll leave this open as an issue to be fixed. @d4op site: is the method used ^_^ which is basically just a better "inurl:" for the purpose we're using it for.

h1tman commented 7 years ago

I tried: .com .co.uk

Then tried this: .org%20.com%20.biz. This one only brought me .org results... I'm confused: when I target 1 domain it works fine, but when I try to grab .com .uk .net it will only get one of them.

d4op commented 7 years ago

Maybe start 3 instances of the Python script ;) One instance per TLD ☺


NovaCygni commented 7 years ago

Solution: .com OR site:.co.uk

Seems Bing requires the OR operator.
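Building that query string programmatically is straightforward; a minimal sketch (assuming, as the thread does, that Bing treats "site:x OR site:y" as a union of the two filters):

```python
# Sketch: join several TLD filters with Bing's OR operator.
def bing_site_filter(tlds):
    """['.com', '.co.uk'] -> 'site:.com OR site:.co.uk'"""
    return " OR ".join("site:{}".format(t) for t in tlds)

def full_query(dork, tlds):
    """Append the combined site: filter to a dork."""
    return "{} {}".format(dork, bing_site_filter(tlds))
```

For example, `full_query("index.php?id=", [".com", ".co.uk"])` yields a single query covering both TLDs.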

h1tman commented 7 years ago

So I have to scan 1 domain at a time for now? It seems there's no solution... It's just that when I'm doing a scan I get 400k sites, and once it takes the .com sites only, I have around 20k in the list. So I want to grab a few domains at once.

NovaCygni commented 7 years ago

The solution is to put "OR" between the domains and add site: to each additional domain. I also strongly encourage the use of '' when selecting domains to avoid the issue you just mentioned; by using *.com OR site:.co.uk you should have removed the non-compliant sites.

d4op commented 7 years ago

Or hotpatch the code for your needs…

sites = input("\nChoose your target (domain) ie .com, to attempt to force the domain restriction use , ie .com : ")
sitearray = [sites]

Replace sitearray with sitearray = [".com", ".net", ".uk", ".org"]

When asked for the input domain, leave it empty and press Enter.
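d4op's hotpatch, as a sketch (the prompt text is simplified; the real source may differ):

```python
# Original (simplified): the prompt feeds a single TLD into a
# one-element list, so only one domain gets scanned.
#   sites = input("\nChoose your target (domain), ie .com: ")
#   sitearray = [sites]

# Hotpatch: hard-code the TLDs you want; the scanner's existing loop
# over sitearray then covers each TLD in turn.
sitearray = [".com", ".net", ".uk", ".org"]
```

Leaving the input prompt empty and pressing Enter then lets the hard-coded list take effect.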


h1tman commented 7 years ago

| Domain: <.usa> Has been targeted
 | Collected urls: 36526 Since start of scan
 | D0rks: 2333/2333 Progressed so far
 | Percent Done: 100
 | Current page no.: <80> in Cycles of 10 Page results pulled in Asyncio
 | Dork In Progress: .aspx?pid=
 | Elapsed Time: 2:37:55

[+] URLS (unsorted): 36526
[+] URLS (sorted)  : 2

I inputted .usa, as I believe that's the correct way. Very weird... how can I save the full list? The sorted output only brought 2 results; it has never done that before. Usually I could get 400k results and only 30k sorted... I'm not sure, there seem to be a few bugs.

d4op commented 7 years ago

Maybe you started too many threads and Bing is blocking you, so it sorted only 2 URLs. Write some debug prints of the current URL into the code and look.


h1tman commented 7 years ago

| Domain: <.ca> Has been targeted
 | Collected urls: 388069 Since start of scan
 | D0rks: 2333/2333 Progressed so far
 | Percent Done: 100
 | Current page no.: <70> in Cycles of 10 Page results pulled in Asyncio
 | Dork In Progress: .asp?lang=ca
 | Elapsed Time: 2:49:47

[+] URLS (unsorted): 388069
[+] URLS (sorted)  : 6707

I don't understand why the sorted URLs are so low. Isn't there a place where the unsorted ones can be found/saved?

NovaCygni commented 7 years ago

Things removed: 1) duplicate sites, 2) a small range of domains/keyword-containing sites. @d4op Really I should add a "dbg.log" file with the threads and other settings people used, so I can filter things like this.

Also, you shouldn't trigger a Bing block unless your connection is deemed suspicious. What settings for threads and so forth are you selecting?
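The sorting step NovaCygni describes (dropping duplicates, same-domain entries, and filtered zones) could be sketched roughly like this; the function name, filter list, and the naive last-two-labels domain guess are all assumptions for illustration, not the scanner's actual code:

```python
# Hypothetical sketch of the "sorted" filter: keep one URL per registered
# domain and skip filtered zones such as .gov and .mil.
from urllib.parse import urlparse

FILTERED = (".gov", ".mil")

def sort_urls(urls):
    seen_domains = set()
    kept = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host.endswith(FILTERED):
            continue  # filtered zone, never handled
        # Naive registered-domain guess: last two labels
        # (wrong for e.g. .co.uk, but enough to illustrate).
        domain = ".".join(host.split(".")[-2:])
        if domain in seen_domains:
            continue  # same domain already kept (e.g. mail.example.com)
        seen_domains.add(domain)
        kept.append(url)
    return kept
```

A filter like this is why a 388k-URL unsorted list can collapse to a few thousand sorted entries: most hits share a registered domain.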

h1tman commented 7 years ago

Hello, I've been using 50 threads... I left it running last night on random domains. In total I scraped 1.5 million URLs; guess how many sites I got in return: 40k!! Something is seriously wrong with this scanner...

To be fair, there is some serious issue with this dorker. My dorks are my own, ones that I made. I input 10k dorks, get 1.5 million sites and 40k URLs, and each dork I use is unique.

In total I have scraped 2 million sites and only got 70k valid links back. That's really bad and a serious issue. If there are any PHP coders here, I would pay you to make me one that connects to bing.com and extracts sites in PHP. Thanks.

NovaCygni commented 7 years ago

Every public/wild dork in existence is in the d0rk list; each is unique or a slight variation. "Dorks that I made": I'd be curious how you generated these d0rks, especially as you claim 10k of them, as that would take roughly 6 months by hand to generate and validate... I know, I've been there!

As stated before, sites removed are either duplicates, the same domain, or on the "Filter-List" of sites not to handle (.gov sites, .mil and so forth).

Considering the closest rival to v3n0m returns hundreds, at most thousands, of valid URLs in a 5-hour period, where v3n0m's output is far higher, that's hardly an issue. If you want, simply recode the 3 lines it would take to have it keep ALL the duplicates and same-domain entries (mail.example.com, example.com, cpanel.example.com) and feel free to wait the extra 50 hours processing all the additional no-value results it will give you.

V3n0M is for harvesting vulnerable sites en masse, with numbers relating to ACTUAL hits. I could easily have removed a large chunk of restrictions and had the d0rker throwing the endless false positives that other scanners do, but I figured it would be more logical to have the scanner remove things known to cause false positives or not to be vulnerable. Would you be happier if from 2 million URLs you got 1.2 million "sorted", and then had to wait the extra time for 1 million not-vulnerable links to be checked, or would you prefer 2 million URLs with 100k sorted, of which 80% are positives?

PS: The "Checking Vuln URLs" process is seriously limited by your connection speed; it'll pull in 4 Mb/s easily. If it stalls or starts skipping results there due to lag/time-outs, that's your connection's issue. As it stands, V3n0M with a decent connection is still far superior to the nearest rival d0rker.

h1tman commented 7 years ago

Hello, thanks for the reply. Yes, you are right about the d0rk list... but I've been playing with dorks/SQLi since the darkc0de Python scripts. Yes, I've been there a few years ago, making dorks by hand; it's painful! I basically have a program with 4 list boxes that combines them all into one dork. Example: list1: index|page|video|view|forum| list2: php|asp|aspx|jsp|cfm| list3: id=|cat=| list4: site:uk|site:net. Result: index.php?id= site:uk. And I have another program where you import your URLs and it will copy and build dorks from selected sites, to build your own dork system as well. Once I add, say, 100 keywords, I can get 10k dorks from it; it will generate them.

I actually gave the scanner a good test on 5 roots with Kali on them. I actually only wanted to use v3n0m to build a list of scraped sites; I use other software for checking the vulns/exploiting. As I say, I ran this on a few VPSes with over 200 Mb per second. I just did another scan: .com, 400k sites; then after it processed, it said 12k valid... How is that normal, when I'm using more dorks than the results I'm getting, and asking for 30 pages per dork? Thanks.

OK, well, I'm doing one more test with v3n0m to make up my mind. I have made a list of 40k dorks, with 50 threads and 30 pages. I will paste the results after, and I guarantee it will say it has scraped 2M+ sites and only give 20-30k valid... and this time I have no filter, so it will gather all TLDs. This should really, at minimum, give me, let's say, 100-400k valid links, but it won't. Previously, when I used other scanners which are outdated, from, let's say, 300k scraped sites I would at least get 30k vuln, never mind 30k sites in total from a valid scrape.

| Domain: <> Has been targeted
 | Collected urls: 142155 Since start of scan
 | D0rks: 668/39002 Progressed so far
 | Percent Done: 1
 | Current page no.: <40> in Cycles of 10 Page results pulled in Asyncio
 | Dork In Progress: aspx?supplier= 372
 | Elapsed Time: 0:47:42

|----------------------------------------------------------------|
| Release Date 18/08/2016                                        |
|                                                                |
|        Proxy Enabled  [ False ]                                |
|                                                                |
|                    _____       _____                           |
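The seed-generator idea h1tman describes above (combining keyword lists into dorks) is essentially a Cartesian product; a minimal sketch with illustrative list contents:

```python
# Sketch of a dork seed generator: every combination of page name,
# extension, and parameter becomes one dork. List contents are examples.
from itertools import product

pages = ["index", "page", "view"]
exts = ["php", "asp", "aspx"]
params = ["id=", "cat="]

def generate_dorks(pages, exts, params):
    """E.g. ('index', 'php', 'id=') -> 'index.php?id='."""
    return ["{}.{}?{}".format(p, e, q)
            for p, e, q in product(pages, exts, params)]
```

Because the counts multiply, about 100 keywords spread across the lists can easily yield thousands of dorks, which matches the 10k figure claimed in the thread.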

NovaCygni commented 7 years ago

Ah, a seed generator. Yeah, I looked into making a module for seed generation from keywords but figured there was no demand for it. Well, you have a number of factors that could be kicking in, the largest being Bing itself, which sadly likes to repeat "results" from the same URL far more often than Yahoo or other search engines do. But until a workable captcha-bypass method is implemented, I can't re-enable the other search engines. That's the reason the number of search engines on the description page was replaced with ~ :+1:

Changing the search engine is easy, but the problem is the "suspicious traffic" captcha walls appearing a very short time after the scan starts.

h1tman commented 7 years ago

OK, no problem. So you're trying to find a way to bypass the search engine; you need to use some bypass, or just use SOCKS proxies in list format, not just a single one.

| Domain: <> Has been targeted
 | Collected urls: 358749 Since start of scan
 | D0rks: 1660/39002 Progressed so far
 | Percent Done: 4
 | Current page no.: <10> in Cycles of 10 Page results pulled in Asyncio
 | Dork In Progress: .php?cid= 129
 | Elapsed Time: 1:51:34

Right, I have to sleep; tomorrow I will post my final test :) And if we do get banned, why doesn't your scanner output that and quit the session, instead of running in a loop grabbing garbage? I play around a lot with Perl bots which have good scanners in them; they use PHP bypasses for Google/Bing. h1tman@jabber.ru

NovaCygni commented 7 years ago

A SOCKS proxy list would need to be self-updating to stay current, or would need someone to maintain a "working list" so people can update it from within the program. Cycled proxies would need to be at least anonymous, and not detected as free proxies by the search engine, to work.
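The proxy-list idea raised here, loading a file of proxies and cycling through them per request, could be sketched like this (file format, names, and the round-robin policy are assumptions; health-checking and anonymity detection are deliberately left out):

```python
# Sketch: load a plain-text proxy list (one host:port per line) and
# hand proxies out round-robin, instead of using a single fixed proxy.
from itertools import cycle

def load_proxies(path):
    """Read non-empty lines from a proxy list file."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def proxy_rotation(proxies):
    """Infinite round-robin iterator; call next() once per request."""
    return cycle(proxies)
```

A real implementation would also need to drop dead proxies and re-fetch the list periodically, which is the maintenance burden NovaCygni points out.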