DataDog / managed-kubernetes-auditing-toolkit

All-in-one auditing toolkit for identifying common security issues in managed Kubernetes environments. Currently supports Amazon EKS.
Apache License 2.0

"mkat test-imds-access" returns an invalid output when the node enforces IMDSv2 #8

Closed vchaplygim closed 1 year ago

vchaplygim commented 1 year ago
  1. I tried the AWS Instance Metadata Service (IMDS) test. It reports that the IMDS is accessible:

     ```
     mkat eks test-imds-access
     2023/07/11 15:34:32 Testing if IMDS is accessible to pods by creating a pod that attempts to access it
     2023/07/11 15:34:55 IMDS is accessible and allows any pod to retrieve credentials for the AWS role
     ```

  2. But if you try to pull the metadata yourself (following https://blog.christophetd.fr/privilege-escalation-in-aws-elastic-kubernetes-service-eks-by-compromising-the-instance-role-of-worker-nodes/), the request is rejected with 401 Unauthorized:

     ```
     # curl -o - -I http://169.254.169.254/latest/meta-data/iam/info
     HTTP/1.1 401 Unauthorized
     Content-Length: 0
     Date: Tue, 11 Jul 2023 12:41:52 GMT
     Server: EC2ws
     Connection: close
     Content-Type: text/plain
     ```

  3. So this is most likely because the node enforces IMDSv2, where metadata can only be retrieved with a session token (see link1 and link2); a sketch of that token flow is shown below.
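For context, the token handshake that IMDSv2 requires looks roughly like this (same endpoint as the 401 example above; the 21600-second TTL is just an example value):

```bash
# Request an IMDSv2 session token (TTL is an arbitrary example value)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# The same request that returned 401 above succeeds once the token header is supplied
curl -o - -I -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/info
```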

What could be the reason for this behavior? Perhaps it is worth finding the cause and fixing it?

christophetd commented 1 year ago

Thanks for reporting! I have a fix that should work here; can you test it and confirm?

https://github.com/DataDog/managed-kubernetes-auditing-toolkit/pull/9
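For anyone who wants to try the patched version before a release, something like the following should let you check out and build the PR branch locally (the build command is an assumption that the mkat main package sits at the repository root; check the repo's README or Makefile for the canonical way to build):

```bash
# Fetch the PR branch (pull/9 is the PR linked above) into a local branch
git clone https://github.com/DataDog/managed-kubernetes-auditing-toolkit.git
cd managed-kubernetes-auditing-toolkit
git fetch origin pull/9/head:imds-fix
git checkout imds-fix

# Build and re-run the IMDS test with the patched binary
go build -o mkat . && ./mkat eks test-imds-access
```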

vchaplygim commented 1 year ago

Looks like a successful fix, thank you!

Example on the first cluster:

```
2023/07/12 12:31:13 Testing if IMDSv1 and IMDSv2 are accessible from pods by creating a pod that attempts to access it
2023/07/12 12:31:26 IMDSv2 is not accessible to pods in your cluster: unable to establish a network connection to the IMDS
2023/07/12 12:31:29 IMDSv1 is not accessible to pods in your cluster: able to establish a network connection to the IMDS, but no credentials were returned
```

Example on the second cluster:

```
2023/07/12 12:29:46 Testing if IMDSv1 and IMDSv2 are accessible from pods by creating a pod that attempts to access it
2023/07/12 12:29:55 IMDSv1 is not accessible to pods in your cluster: able to establish a network connection to the IMDS, but no credentials were returned
2023/07/12 12:29:55 IMDSv2 is accessible: any pod can retrieve credentials for the AWS role example-persistent-node-00000000000
```
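The difference between the two clusters presumably comes down to the nodes' IMDS configuration; one way to confirm that is to inspect the instances' metadata options with the AWS CLI (the instance ID below is a placeholder for a worker node of the cluster):

```bash
# HttpTokens: "required" means IMDSv2 is enforced; HttpPutResponseHopLimit controls
# whether pods (one extra network hop away) can reach the IMDS at all
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].MetadataOptions'
```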

A couple of suggestions:

  1. Add a note to the output that a session token is needed to get data from IMDSv2.
  2. Add information to the README on how to retrieve data from IMDSv2, like the steps below (I tried to push a PR updating the README, but I don't have access rights).

If you see "IMDSv2 is accessible: any pod can retrieve credentials for the AWS role", you can verify it with the following commands (more information about instance metadata retrieval here):

  1. Create a temporary pod with curl:

     ```bash
     $ kubectl run --rm -i --tty imdspod --image=alpine/curl --restart=Never -- sh
     ```

  2. Create a temporary token to authenticate to IMDSv2:

     ```bash
     $ TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
     ```

  3. Get the top-level metadata items:

     ```bash
     $ curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/
     ```

  4. Retrieve temporary credentials for the IAM role eksctl-mkat-cluster-nodegroup-ng-NodeInstanceRole-AXWUFF35602Z (the token header is required here as well):

     ```bash
     $ curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/eksctl-mkat-cluster-nodegroup-ng-NodeInstanceRole-AXWUFF35602Z
     ```
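As a follow-up to step 4, the credentials in the returned JSON document can be checked from any machine with the AWS CLI installed; a minimal sketch, where the angle-bracket values are placeholders to copy from the step 4 output (AccessKeyId, SecretAccessKey, and Token are the standard fields of the IMDS credential document):

```bash
# Paste the values from the JSON document returned in step 4
export AWS_ACCESS_KEY_ID=<AccessKeyId from step 4>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from step 4>
export AWS_SESSION_TOKEN=<Token from step 4>

# Confirm which IAM principal the node credentials map to
aws sts get-caller-identity
```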