PatrikHudak opened this issue 5 years ago (status: Open)
Hello everyone,
I can confirm this takeover is still possible. Adding some details:
- If you get an error like 'the bucket .... already exists', it is not vulnerable.
- A CNAME pointing to an AWS domain name is not necessary. I took over a bucket that a subdomain pointed to via several IP addresses. The relevant part is the response fingerprint.
- The "Code: IncorrectEndpoint" error can be fixed by removing the bucket and re-creating it in another region. It takes around 1 hour for the bucket to be removed; before that, you won't be able to create it. Use the AWS CLI to automate this part.
- If you are getting "Access denied" errors, check this guide
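A minimal AWS CLI sketch of the remove-and-recreate step mentioned above (the bucket name and target region below are hypothetical placeholders; substitute the dangling bucket name from the CNAME and the region the website endpoint expects):

```shell
# Hypothetical placeholders for illustration only.
BUCKET="example-dangling-bucket"
TARGET_REGION="eu-west-1"

recreate_bucket() {
  # Delete the bucket and its contents (--force empties it first).
  aws s3 rb "s3://$1" --force
  # S3 bucket names take a while (often ~1 hour) to free up globally;
  # this waiter polls for the name to disappear (re-run if it times out).
  aws s3api wait bucket-not-exists --bucket "$1"
  # Recreate the bucket in the other region.
  aws s3 mb "s3://$1" --region "$2"
}

# Usage (requires AWS credentials configured):
# recreate_bucket "$BUCKET" "$TARGET_REGION"
```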
I'm trying to take over a subdomain served from an S3 bucket. If I access the subdomain directly, e.g. sub.domain.com, it always returns a 403 error, but if I access sub.domain.com/index.html it opens normally. What's the problem?
@radiustama77 AWS has very granular permission controls. Opening sub.domain.com/ needs the s3:ListBucket permission, which you don't have. However, you do have s3:GetObject, so if you can guess the name of a file, you will be able to fetch it. Based on the behavior you described, subdomain takeover is not possible. Also, it seems the bucket files are intended to be public, judging by the 'index.html' filename. You may try brute-forcing filenames and see if you find something sensitive (with gobuster, for example).
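This permission split is easy to observe with curl (a sketch; the host below is hypothetical):

```shell
# Hypothetical subdomain used for illustration.
HOST="sub.domain.com"

http_status() {
  # Print only the HTTP status code for a given path on $HOST.
  curl -s -o /dev/null -w '%{http_code}' "http://$HOST/$1"
}

# With only s3:GetObject granted, you would expect something like:
#   http_status ""            -> 403 (listing the bucket needs s3:ListBucket)
#   http_status "index.html"  -> 200 (fetching a known key needs only s3:GetObject)
```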
What I mean is: I have already taken over several subdomains, but for some of them, accessing subdomain.example.com returns a 403 Access Denied error, while subdomain.example.com/index.html works normally.
That's because you have not specified an index document in the static-hosting configuration, which you need to do for the index page. Otherwise it keeps returning a 403 error.
@GDATTACKER-RESEARCHER I already specified the index document in static hosting. The S3 URL also works properly, e.g. subdomain.s3-website-us-east-1.amazonaws.com, but the error still happens when I access it via subdomain.example.com.
It means that the bucket is not available for takeover.
Yes, it is not possible to claim this one, as it's already in use; only the permissions for static hosting have been disabled.
For girishsarwal.me, a CNAME query returns no answer, only an SOA record in the authority section:

id 64053, opcode QUERY, rcode NOERROR, flags QR RD RA
;QUESTION
girishsarwal.me. IN CNAME
;ANSWER
;AUTHORITY
something.me. 899 IN SOA ns-732.awsdns-27.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
;ADDITIONAL

I'm facing this problem with an S3 bucket. What's the solution for this?
@soynek did you ever find a solution to this? If so, how did you fix it?
In your case, us-west-2 is the region.
How do I find out the region?
Simply change the region to us-west-2, in your case, for the domain girishsarwal.me.
Yeah, I mean: how do I find out which region the domain's bucket is in?
Like in this case: how can I determine the correct region in which to create a bucket for these domains?
Simply try the common methods; if that doesn't work, you need to change regions every 2 hours until you get the right one.
What are the common methods for finding the region?
You can also refer to IP history to find the exact IP range matching your vulnerable domain's IP: https://ip-ranges.amazonaws.com/ip-ranges.json
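As a sketch, the published ranges can be filtered down to per-region S3 prefixes with jq (assuming jq is installed; the download itself needs network access):

```shell
# jq filter: keep only S3 entries and print "<region> <cidr>" pairs.
FILTER='.prefixes[] | select(.service=="S3") | "\(.region) \(.ip_prefix)"'

# Apply it to the published ranges, then look for the range that contains
# the IP your vulnerable domain resolves to:
# curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r "$FILTER" | sort -u
```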
I checked my site's IP with ping and then, using your method, looked for it in the Amazon prefixes, but I didn't find it. How can I get the region if the IP isn't in the data you linked?
A few ways: the default location of other buckets the website uses, the IP ranges of the bucket, using the AWS CLI to look up the region, etc.
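One concrete way is to ask S3 itself: the S3 endpoint returns the bucket's region in the x-amz-bucket-region response header, even on 403/301 responses. A small sketch (the function name is mine):

```shell
bucket_region() {
  # HEAD the global S3 endpoint for the bucket; extract the
  # x-amz-bucket-region header, which names the bucket's actual region.
  curl -sI "https://$1.s3.amazonaws.com" \
    | tr -d '\r' \
    | awk 'tolower($1)=="x-amz-bucket-region:" {print $2}'
}

# Usage: bucket_region some-bucket-name
```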
The IP range is available; if you know networking, you should easily see that your IP range is listed there.
For example, for endpass.com I looked up the IP and got 104.21.37.171, then checked the Amazon IP-range prefixes but still didn't find it. Can you give me some advice?
Why do you need a script for it when you can do it manually?
Hi guys, is this still vulnerable? I get an error that the bucket name is already taken.🤔
Hi guys, I found the following scenario:
subdomain.example.com returns NoSuchBucket.
dig cname subdomain.example.com returns:
> dig cname subdomain.example.com
; <<>> DiG 9.18.12-0ubuntu0.22.04.3-Ubuntu <<>> cname subdomain.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43658
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;subdomain.example.com. IN CNAME
;; ANSWER SECTION:
subdomain.example.com. 3600 IN CNAME RANDOM_NAME_SEQUENCE.s3.amazonaws.com.
;; Query time: 31 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Thu Nov 02 10:55:24 CET 2023
;; MSG SIZE rcvd: 131
I checked the bucket region with: curl -sI RANDOM_NAME_SEQUENCE.s3.amazonaws.com | grep bucket-region
I then claimed the name by creating an S3 bucket called RANDOM_NAME_SEQUENCE in the region from the previous step, uploaded a PoC to RANDOM_NAME_SEQUENCE.s3.amazonaws.com/poc, and made both the bucket and the PoC file public.
Navigating to https://RANDOM_NAME_SEQUENCE.s3.amazonaws.com/poc shows the file properly, but subdomain.example.com/poc still shows NoSuchBucket.
I also tried creating the bucket with static website hosting enabled. Has anyone encountered this scenario, or does anyone know what's happening here?
@six2dez please refer to this issue https://github.com/EdOverflow/can-i-take-over-xyz/issues/361 I have faced similar kind of scenario hope it will be useful
Bucket with the same name already exists
Is this an edge case now?
No
That's a bucket region mismatch; change the region.
@GDATTACKER-RESEARCHER how can you find out which one to change to, out of the 22 options?
Different ways, depending on the case: ping, other buckets in use by the site, the CNAME, etc.
Could you explain in a little more detail? I am facing the same problem.
While working on a bug bounty, I found that a subdomain was vulnerable to subdomain takeover via an AWS S3 bucket. I created a bucket with the same name and uploaded an HTML file to take over the subdomain. However, when I visited the domain after creating the bucket, I encountered the following error:
400 Bad Request
Code: IncorrectEndpoint
Message: The specified bucket exists in another region. Please direct requests to the specified endpoint.
Endpoint: bite-lt.pms-ou.aon.com.s3-website-us-west-2.amazonaws.com
RequestId: WAD8676JGAR3HYMJ
HostId: mQPpVkRu9vHxhHiWKBoZu/9/c9RG5EXzr+eLtWB29RiRFQzMZ4ib6hl0mhcIa31IwD+Wj7EFims=
The error indicates that us-west-2 is the incorrect endpoint, meaning I created the bucket in the wrong region. To identify the correct region, I used nslookup and dig, which gave me the following IPs:
104.18.38.14
172.64.149.242
Could you please guide me on how to determine the correct AWS region to create the bucket in order to successfully take over the domain?
Service name
Amazon (AWS) S3
Proof
Amazon S3 service is indeed vulnerable. Amazon S3 follows much the same concept of virtual hosting as other cloud providers. S3 buckets can be configured for website hosting to serve static content as web servers. If the canonical domain name contains "website", the S3 bucket is configured as website hosting. I suspect that non-website and website-configured buckets are handled by separate load balancers, and therefore they don't work with each other. The only difference is in bucket creation, where the correct website flag needs to be set if necessary.

Step-by-step process:
To verify the domain, I run:
Note that there are two possible error pages depending on the bucket settings (set as website hosting or not).
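The fingerprint check can be sketched as a small shell helper (the helper name is mine; the NoSuchBucket error code in the response body is the indicator that the bucket name is unclaimed on both the REST and website endpoints):

```shell
check_takeover() {
  # Fetch the page and look for S3's NoSuchBucket error code, which is
  # returned for a non-existent (claimable) bucket.
  if curl -s "http://$1/" | grep -q 'NoSuchBucket'; then
    echo "possible takeover: $1"
  else
    echo "fingerprint absent, likely not vulnerable: $1"
  fi
}

# Usage: check_takeover subdomain.example.com
```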
Some reports on H1, claiming S3 buckets:
Documentation
There are several formats of domains that Amazon uses for S3 (RegExp):
^[a-z0-9\.\-]{0,63}\.?s3\.amazonaws\.com$
^[a-z0-9\.\-]{0,63}\.?s3-website[\.-](eu|ap|us|ca|sa|cn)-\w{2,14}-\d{1,2}\.amazonaws\.com(\.cn)?$
^[a-z0-9\.\-]{0,63}\.?s3[\.-](eu|ap|us|ca|sa)-\w{2,14}-\d{1,2}\.amazonaws\.com$
^[a-z0-9\.\-]{0,63}\.?s3\.dualstack\.(eu|ap|us|ca|sa)-\w{2,14}-\d{1,2}\.amazonaws\.com$
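A tightened version of the first pattern can be exercised with grep -E; the sample hostnames below are made up:

```shell
# First pattern from the list, written as a clean POSIX ERE
# (inside brackets, . and - need no escaping).
RE='^[a-z0-9.-]{0,63}\.?s3\.amazonaws\.com$'

matches() { printf '%s\n' "$1" | grep -Eq "$RE"; }

matches "mybucket.s3.amazonaws.com" && echo "match"
matches "example.com"               || echo "no match"
```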
Note that there are cases where only the raw domain (e.g. s3.amazonaws.com) is included in the CNAME and takeover is still possible.
(Documentation taken from https://0xpatrik.com/takeover-proofs/)