bugbaba opened this issue 5 years ago
I wonder if you could achieve this by using a specialized paths
file with your random string in it. Something like:
meg randomstring.txt
awk '{print $2}' out/index | unfurl -u domains > wildcards.txt
grep -v -f wildcards.txt hosts > tamehosts.txt
There's probably a way to do this in one unholy one-liner, but the gist of it is to filter at the host level, not to teach meg to identify wildcards. Maybe helpful?
Currently I am doing this in a kind of similar way.
grep -l randomshit *gopaths | xargs -I '{}' mv '{}' falsepositves/
ls *gopaths | cut -d '~' -f1 | sed 's/:/:\/\//' | sed 's/$/\//' > megList.txt
So I am kind of using gobuster + grep to detect wildcards, but there are still a few hosts that manage to bypass both of the checks.
How are the hosts bypassing both checks? Can you share a request/response for one of the ones that slipped by?
Because of the way the web works :)
Gobuster sends random paths, and if the host doesn't respond with 200 OK it assumes the host is safe from wildcards. The same goes for the way I am trying to find them, using randomshit.txt with a bunch of potential wildcard paths.
So it is possible to configure a webserver to serve the same content based on the file path.
Example:
site.com is the host we are testing.
When Gobuster sends site.com/16545623265-arahnsifk_rakl, the server returns 404 because there is no file by that name, so the host passes the Gobuster test.
Then we try requesting some possible wildcard paths like .htpasswdrandom and .random.js, but since the server doesn't have those files and isn't configured to handle those paths/extensions, it returns 404 for those too, thereby passing this test as well.
So we confidently start testing it against a huge list of paths. But the server is configured to handle the admin keyword: for every path that contains admin, it serves the admin login portal.
Like
site.com/admin_centter
site.com/admin_1
site.com/admin_main
site.com/admin_portal
site.com/admin_secret
site.com/backend_admin
This is how some hosts are able to bypass both of the tests. I keep removing the false data from my results, hence I don't have an example right now.
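To make the scenario concrete, here is a minimal sketch (my own illustration, not taken from any real target) of a server configured this way: it answers 404 to random probe paths but returns the same 200 page for anything containing admin.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Any path containing "admin" gets the same 200 login page,
		// so admin_1, admin_portal, backend_admin, ... all "exist".
		if strings.Contains(r.URL.Path, "admin") {
			fmt.Fprintln(w, "<html><body>Admin login</body></html>")
			return
		}
		// Random probe paths like /16545623265-arahnsifk_rakl land here
		// and get a 404, so the host passes the random-string check.
		http.NotFound(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A single random-string probe against such a host comes back 404, so the host looks clean, even though a later scan with a real path list sees 200 for every admin_* guess.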
Sorry if I'm being dense here, but I think the server returning 404 for random and non-existent paths is a red herring. What matters is whether it returns 200 to a random string. Whether you use Gobuster to generate that random string or just do meg /sdfsafsafsafsadflaskjfsdalfjsdaflaksjfas
shouldn't matter if you're filtering out the hosts that return 200.
If you have cases where the host is not returning 200 for random paths but is somehow still configured with a wildcard, that's a new one to me.
As it stands, I think meg /sdfsafsafsafsadflaskjfsdalfjsdaflaksjfas
or Gobuster's built-in random check will get you the same results if you use them to modify the hosts input into meg when you're doing your full checks.
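If it helps, here is a rough sketch of what that host-level pre-filter could look like as a standalone step (the probe path, timeout, and overall flow are my assumptions, not anything built into meg), assuming each line of the hosts file already carries a scheme like https://:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Assumed probe path; any long random string works the same way.
	probe := "/sdfsafsafsafsadflaskjfsdalfjsdaflaksjfas"
	client := &http.Client{Timeout: 10 * time.Second}

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		host := sc.Text()
		if host == "" {
			continue
		}
		resp, err := client.Get(host + probe)
		if err != nil {
			// Keep unreachable hosts; meg will surface its own errors.
			fmt.Println(host)
			continue
		}
		resp.Body.Close()
		// Only print hosts that do NOT answer 200 to the random path.
		if resp.StatusCode != http.StatusOK {
			fmt.Println(host)
		}
	}
}
```

The hosts it prints could then be used as the hosts input to meg for the full path list.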
You are right on that part.
Hi Tom,
I think there should be some measure to detect wildcard URLs and, based on that, stop sending requests to those endpoints.
This doesn't impact the results, but it would save time and resources on endpoints that return 200 OK for everything. Personally I am not very familiar with Golang :|
But this is what I have tried: detecting when randomshit is in the URL and the server responds with 200 OK (code compared to the original repo).
It does work in detecting the wildcard and removing the domain from the hosts slice. But since the request worker is already running in the background and only reads from hosts initially, the removal has no effect.
This is how it looks now:
Even if you don't wish to implement this feature in meg, it would still be of great help if you could give your input on this and on my logic for trying to solve this issue.
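To spell out the logic I have in mind, here is a rough sketch (the names are made up for illustration; this is not my actual change and not how meg is structured internally): the wildcard probe would have to run and filter the hosts before the request workers are started, rather than mutating the slice after they have begun.

```go
package main

import "fmt"

// filterWildcards drops hosts flagged as wildcard responders before any
// request worker is started, so the workers never read them at all.
// isWildcard stands in for the "random path returns 200 OK" probe.
func filterWildcards(hosts []string, isWildcard func(string) bool) []string {
	kept := hosts[:0]
	for _, h := range hosts {
		if !isWildcard(h) {
			kept = append(kept, h)
		}
	}
	return kept
}

func main() {
	hosts := []string{"https://a.example", "https://b.example"}
	// Placeholder probe result; the real check would send the request.
	hosts = filterWildcards(hosts, func(h string) bool {
		return h == "https://b.example"
	})
	fmt.Println(hosts) // only non-wildcard hosts reach the request workers
}
```

In other words, the probe result has to influence what the workers are given in the first place, since they never look at the hosts slice again once they start.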
Regards, Bugbaba