initstring / cloud_enum

Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
MIT License

/foobar the S3 URL to make it properly detect #72

Open nrathaus opened 3 weeks ago

nrathaus commented 3 weeks ago

Currently S3 detection is not working due to a missing path in the URL.

This patch adds a fake path so that S3 detection works
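For context, a minimal sketch of the idea (hypothetical code, not the actual cloud_enum patch): requesting a non-existent key in an existing bucket returns a NoSuchKey (404) or AccessDenied (403) error body, while a non-existent bucket returns NoSuchBucket, so appending a fake path like /foobar makes the responses distinguishable:

```python
# Hypothetical sketch of the /foobar trick, not the actual cloud_enum code.
# Assumes standard S3 XML error bodies in the response.

def probe_url(bucket):
    """Build a probe URL with a fake key appended (the /foobar patch)."""
    return f"https://{bucket}.s3.amazonaws.com/foobar"

def classify(status, body):
    """Classify an S3 probe response by HTTP status code and error body."""
    if "NoSuchBucket" in body:
        return "nonexistent"   # bucket does not exist
    if status == 404 and "NoSuchKey" in body:
        return "open"          # bucket exists and the key lookup was allowed
    if status == 403:
        return "protected"     # bucket exists but access is denied
    return "unknown"
```

Note that a 404 with NoSuchKey only proves the bucket exists; whether an open bucket can still be listed would need a separate GET on the bare bucket URL.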

initstring commented 3 weeks ago

Thanks for this @nrathaus!

Must be new behavior, I wonder when that was implemented.

Does your fix still support the bucket listings when an open bucket is found?

nrathaus commented 3 weeks ago

@initstring - it seems to be a rolling change - it now doesn't work when you try /foobar, though it worked an hour ago

I don't know what is going on...

At the moment I can't find a way to detect S3 :(

nrathaus commented 3 weeks ago

I think there is some sort of rate limit / blocking - I have switched the VPN on and off, and now it seems that S3 detection with the /foobar in place works (without it, it doesn't)

When you hit the rate limit, everything returns non-existing - even completely valid URLs

initstring commented 3 weeks ago

Thanks for your work to troubleshoot this, @nrathaus!

If you (or anyone else reading) find a solution, please check back! Unfortunately, I probably won't have time to troubleshoot this myself soon. Sorry about that, things are just pretty busy at work/home right now.

Zoudo commented 3 weeks ago

> I think there is some sort of rate limit / blocking - I have switched the VPN on and off, and now it seems that S3 detection with the /foobar in place works (without it, it doesn't)
>
> When you hit the rate limit, everything returns non-existing - even completely valid URLs

Can we put an increased delay on the checks for AWS buckets to get around the rate limiting? Do we know at what request rate AWS starts blocking the checks?
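One way to space out the checks - a rough sketch with guessed delay values, since the actual AWS thresholds aren't known from this thread:

```python
import time

# Hypothetical throttling sketch. The delay and backoff values are
# guesses, not documented AWS limits.

def backoff_delays(base=1.0, factor=2.0, retries=4):
    """Exponential backoff schedule in seconds: 1, 2, 4, 8 ..."""
    return [base * factor ** i for i in range(retries)]

def throttled_check(buckets, check, delay=0.5):
    """Run check() on each bucket with a fixed pause between requests."""
    results = {}
    for bucket in buckets:
        results[bucket] = check(bucket)
        time.sleep(delay)
    return results
```

If the limit is per-source-IP (which the VPN on/off observation above suggests), a fixed delay alone may not be enough, and the backoff schedule could be applied after the first suspicious response.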

Zoudo commented 3 weeks ago

@initstring, do we have the changes from @nrathaus merged into main yet?

nrathaus commented 3 weeks ago

@Zoudo The fix isn't 100% accurate - it works sometimes. There is some sort of rate limit, and once you hit it, everything will return NoSuchBucket - even valid buckets

The best fix at the moment, I believe, is to do false-positive and false-negative testing every few requests, but that would require some sort of valid S3 bucket to test against - and I'm not sure that's legal to do, i.e. hardcoding a well-known S3 bucket into the code
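The canary check described above could look roughly like this - a hypothetical sketch where CANARY_BUCKET is a placeholder the user would supply (e.g. their own bucket), sidestepping the concern about hardcoding a third-party bucket:

```python
# Hypothetical canary sketch: every `interval` requests, probe a bucket
# known to exist. If even the known-good bucket reports NoSuchBucket,
# we have hit the rate limit and results are no longer trustworthy.

CANARY_BUCKET = "your-own-test-bucket"  # placeholder, supplied by the user

def results_trustworthy(check, request_count, interval=10):
    """Return False once the canary bucket stops resolving."""
    if request_count % interval != 0:
        return True  # canary not due yet; assume results are still good
    return check(CANARY_BUCKET) != "NoSuchBucket"
```

On a False result the scanner could pause, back off, or discard the last batch of "non-existing" results rather than report them as confirmed negatives.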

nrathaus commented 3 weeks ago

I did some investigation with my AWS setup. From what I see, when public access has been completely blocked, NoSuchBucket is always returned

nrathaus commented 3 weeks ago

I think the current design/implementation of S3 prevents detection of unknown buckets via keywords - at least that's what I think

Zoudo commented 2 weeks ago

> I think the current design/implementation of S3 prevents detection of unknown buckets via keywords - at least that's what I think

Thanks @nrathaus - does this mean that if the result is empty, there are no buckets with public access, i.e. they are protected?

I wonder if this changes if we authenticate before the keyword scans.

nrathaus commented 2 weeks ago

At the moment even valid S3 buckets return as non-existing once you hit the rate limit - which appears to happen within 2-3 requests to non-existing buckets with no paths, and a bit later for existing buckets with an invalid path

The only way to know this has happened is to keep a valid S3 bucket and path at hand and see when it stops resolving

As it stands at the moment, I think this feature is no longer feasible unless something changes or someone finds a new approach