EdOverflow / can-i-take-over-xyz

"Can I take over XYZ?" — a list of services and how to claim (sub)domains with dangling DNS records.
Creative Commons Attribution 4.0 International

Can I take over an S3 bucket? #361

Closed C0oki3s closed 1 year ago

C0oki3s commented 1 year ago

Service name

[Image 1]

1) I have a subdomain xyz.domain.com that is hosted on (pointed at) an S3 bucket, but as you can see in the image above, the bucket returns an AccessDenied error.
2) So I checked whether the bucket name was already taken, using the AWS CLI.

[Image 2]

3) Seeing this, I claimed the S3 bucket and hosted content publicly, yet xyz.domain.com still gives the exact same error message as in Image 1.
4) I also checked with nslookup, and the subdomain was pointing to Fastly.

[Image 3]

5) I tried claiming xyz.domain.com, but it was already claimed ("Domain 'domain.com' is owned by another customer").

What have I tried?
1) I added xyz.domain.com to Route 53, and that succeeded, but it still shows the error from Image 1.
2) I updated both the CNAME and A records, also trying the www-prefixed variant www.xyz.domain.com.
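The triage in steps 1–2 hinges on which XML error code S3 returns for the bucket name. As a minimal sketch (the function and verdict strings are my own, not from any AWS SDK), the decision looks like this: a NoSuchBucket error means the name is unclaimed and could be registered, while AccessDenied (as in Image 1) means a bucket with that name already exists under some account:

```python
import xml.etree.ElementTree as ET

def classify_s3_error(xml_body: str) -> str:
    """Map an S3 XML error response to a takeover verdict (hypothetical helper)."""
    code = ET.fromstring(xml_body).findtext("Code")
    if code == "NoSuchBucket":
        return "bucket name unclaimed: takeover candidate"
    if code == "AccessDenied":
        return "bucket exists (someone owns it): not claimable"
    return f"inconclusive: {code}"

# Sample bodies in the shape S3's REST endpoint uses for errors.
no_such = "<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message></Error>"
denied = "<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>"

print(classify_s3_error(no_such))   # bucket name unclaimed: takeover candidate
print(classify_s3_error(denied))    # bucket exists (someone owns it): not claimable
```

Note that, as this thread goes on to show, an AccessDenied on the public hostname does not even prove the bucket name you tested is the one actually being served.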

knowthetech commented 1 year ago

How can you take over an already claimed bucket?

C0oki3s commented 1 year ago

I claimed the bucket, not someone else, @knowthetech

GDATTACKER-RESEARCHER commented 1 year ago

I claimed the bucket, not someone else, @knowthetech

If you have claimed it, it's simple; what's the issue? Just add a bucket policy and enable static hosting, and you're done. Why mess around with other services? 😁

C0oki3s commented 1 year ago

Please read the comment again. Even after I claimed the bucket, that didn't change anything on xyz.domain.com; it still showed the same error, and I enabled static hosting too. If you read the comment carefully, I stated that the CNAME points to Fastly, not S3, yet the Server response header shows AmazonS3. This is an edge case, and I want to know what's happening; if you guys know, please let me know. I think there is a misconfiguration in the DNS server.

GDATTACKER-RESEARCHER commented 1 year ago

If you claim the bucket, the "bucket does not exist" error will change to "Access Denied" (in the case of an XML-format S3 bucket takeover). Then you switch it to static hosting, but I guess you also missed adding a bucket policy. It doesn't matter whether they have Fastly, Cloudflare, or another edge CDN in front.

C0oki3s commented 1 year ago

@GDATTACKER-RESEARCHER can you please elaborate.

```json
{
  "Version": "2012-10-17",
  "Id": "Policy167899115xxxx",
  "Statement": [
    {
      "Sid": "Stmt167899114xxxx",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xyz.domain.com/*"
    }
  ]
}
```

Here is my policy. I don't know the term "XML-format takeover"; what I'm assuming is that if someone already did an XML-format takeover, then it shows a 403 error because they didn't update the bucket policy. But I'm still confused as to how I could take over the S3 bucket.

knowthetech commented 1 year ago

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xyz.domain.com/"
    }
  ]
}
```

That guy means there are two types of errors in an S3 takeover: one shows the error in XML format, whereas the other is a plain white page with the error mentioned. Try this policy, it should work, and then try visiting the path to your uploaded file.
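The two error presentations mentioned above can be sketched as a tiny classifier (the function is hypothetical; the premise, which matches standard S3 behaviour, is that the REST endpoint answers errors in XML while the static-website endpoint answers with an HTML error page):

```python
def takeover_error_style(body: str) -> str:
    """Distinguish the two S3 error presentations discussed in this thread:
    an XML <Error> document (REST endpoint) versus an HTML error page
    (static-website endpoint). Hypothetical helper for illustration."""
    stripped = body.lstrip()
    if stripped.startswith("<?xml") or stripped.startswith("<Error>"):
        return "xml-style error (REST endpoint)"
    if "<html" in stripped.lower():
        return "white-page HTML error (website endpoint)"
    return "unknown"

rest_body = '<?xml version="1.0"?><Error><Code>NoSuchBucket</Code></Error>'
site_body = "<html><body><h1>404 Not Found</h1></body></html>"
print(takeover_error_style(rest_body))  # xml-style error (REST endpoint)
print(takeover_error_style(site_body))  # white-page HTML error (website endpoint)
```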

knowthetech commented 1 year ago

But there needs to be an asterisk after xyz.domain.com/ in the Resource.
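For clarity, with that asterisk restored the suggested policy would read as follows (a sketch with the same placeholder bucket name; the wildcard makes s3:GetObject apply to every object in the bucket rather than to none):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xyz.domain.com/*"
    }
  ]
}
```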

C0oki3s commented 1 year ago

The policy is indeed correct and I'm able to access my html file at http://xyz.domain.com.s3-website-us-east-1.amazonaws.com 

C0oki3s commented 1 year ago

Please tell me guys if

GDATTACKER-RESEARCHER commented 1 year ago

The policy is indeed correct and I'm able to access my html file at http://xyz.domain.com.s3-website-us-east-1.amazonaws.com 

That's not a big issue; organizations will still accept it sometimes.

C0oki3s commented 1 year ago

@GDATTACKER-RESEARCHER it's not about whether the report gets accepted or not. I just want to know what mistake the developer made to produce such a response. And by the way, the report was marked as a duplicate on H1, but I still want to know!!

C0oki3s commented 1 year ago

@GDATTACKER-RESEARCHER, @knowthetech, @EdOverflow, @codingo Hey there, I was able to figure out what's going on in this scenario, thanks to Green-jam, who gave me a lead from which I was able to replicate it. Here are my findings:

1) Firstly, I have an S3 bucket abc.rhack.tech.s3-website-us-east-1.amazonaws.com that is public and has static hosting enabled. In the scenario above, however, the site is served from an unknown bucket that we cannot discover. Why? I'll explain below.

2) The dev achieves this by making his S3 data buckets available via Fastly:

2.1) He adds his subdomain xyz.rhack.tech to Fastly (in the scenario above, it is xyz.domain.com).

2.2) Then, as the host, he points it at the S3 bucket abc.rhack.tech.s3-website-us-east-1.amazonaws.com (in the scenario above, this bucket is unknown).

[Image 4] [Image 5]

3) Lastly, adding the CNAME to DNS:

3.1) I added my Fastly DNS name xyz.rhack.tech.global.prod.fastly.net as the target, which obfuscates the S3 bucket from dig, nslookup, reverse lookups, etc.
[Image 6]

So, the scenario above is a false positive: an S3 bucket takeover cannot actually be performed.
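The findings above can be condensed into a toy model of why DNS and HTTP disagree in this setup (every name and both dictionaries are illustrative only; a real resolver and Fastly service are far more involved, and Fastly does not actually expose its origin in a header):

```python
# DNS only ever sees the CNAME to Fastly; the S3 origin is configured
# inside the Fastly service and never appears in any DNS record.
DNS = {"xyz.rhack.tech": "xyz.rhack.tech.global.prod.fastly.net"}
FASTLY_ORIGINS = {"xyz.rhack.tech": "abc.rhack.tech.s3-website-us-east-1.amazonaws.com"}

def resolve(name: str) -> str:
    """What dig/nslookup can see: follow the CNAME only."""
    return DNS.get(name, name)

def fetch(name: str) -> dict:
    """What an HTTP client sees: Fastly proxies to its hidden S3 origin,
    so the Server header comes from S3 even though DNS shows Fastly.
    (X-Origin is a made-up header here, purely to expose the model's state.)"""
    origin = FASTLY_ORIGINS[name]
    return {"Server": "AmazonS3", "X-Origin": origin}

print(resolve("xyz.rhack.tech"))            # the Fastly hostname; S3 never visible
print(fetch("xyz.rhack.tech")["Server"])    # AmazonS3
```

This is exactly why claiming a bucket named after the subdomain changed nothing: the bucket actually behind Fastly has some other, undiscoverable name.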

Green-jam: Fastly is connected to an S3 bucket whose details you don't have. The S3 bucket you claimed may effectively just be a random bucket whose name matches the subdomain. The actual S3 bucket Fastly connects to is one you do not know the name of, and it already looks claimed anyway, hence the 403 from S3.

regards, @C0oki3s