jtblin / kube2iam

kube2iam provides different AWS IAM roles for pods running on Kubernetes
BSD 3-Clause "New" or "Revised" License

kube2iam doesn't restrict IAM access in non-default namespace #329

Open tomekit opened 2 years ago

tomekit commented 2 years ago

I am running application pods in the default namespace. If I try to fetch credentials with curl http://169.254.169.254/latest/meta-data/iam/security-credentials, I correctly get: unable to find role for IP 100.98.143.218

I've recently created a new namespace for the ingress controller:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-controller

In that namespace there is a deployment without the iam.amazonaws.com/role annotation. When I try to fetch credentials from within a pod of that deployment, I get the full node credentials: nodes.v2.k8s.local/
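For reference, kube2iam assigns roles via an annotation on the pod template. A deployment that should assume a role would carry something like the following fragment (the role name here is a made-up example, not from this cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
  namespace: ingress-controller
spec:
  template:
    metadata:
      annotations:
        # hypothetical role name for illustration
        iam.amazonaws.com/role: ingress-controller-role

Pods without this annotation fall back to whatever the metadata proxy returns by default, which is why the node role leaks through when no restrictions are configured.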

I would assume that by default kube2iam restricts IAM access and only allows a role once it has been explicitly specified, e.g. as mentioned here: https://github.com/jtblin/kube2iam#namespace-restrictions
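Per the linked docs, namespace restrictions are opt-in: when kube2iam runs with --namespace-restrictions, it only serves roles that the pod's namespace explicitly allows via the iam.amazonaws.com/allowed-roles annotation. A sketch, with an example account ID and role name:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-controller
  annotations:
    # JSON array of role patterns this namespace may assume
    # (account ID and role name below are placeholders)
    iam.amazonaws.com/allowed-roles: |
      ["arn:aws:iam::123456789012:role/ingress-controller-role"]

Without the flag, the annotation is not consulted at all.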

Is my assumption correct, or is there perhaps a bug in kube2iam, or a misconfiguration on my end?

I would appreciate any help on this topic.

Thanks.

kubectl version --short
Client Version: v1.22.2
Server Version: v1.22.2

kops version
Version 1.22.1 (git-da710c931fbfe0fc8fe7bb3e2d41ba6e09c899f9)

kube2iam.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube2iam
  namespace: kube-system
  labels:
    app: kube2iam

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube2iam
  labels:
    app: kube2iam
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube2iam
  labels:
    app: kube2iam
subjects:
  - kind: ServiceAccount
    name: kube2iam
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kube2iam
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
  labels:
    app: kube2iam
spec:
  selector:
    matchLabels:
      name: kube2iam
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      serviceAccountName: kube2iam
      hostNetwork: true
      containers:
        - image: jtblin/kube2iam:kube2iam-2.6.0
          imagePullPolicy: Always
          name: kube2iam
          args:
            - "--app-port=8181"
            - "--iptables=true"
            - "--host-ip=$(HOST_IP)"
            - "--host-interface=cali+"
            - "--verbose"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NO_PROXY
              value: "127.0.0.1,localhost,100.64.0.1,100.64.0.0/10,169.254.169.254"
            - name: http_proxy
              value: "{{ outbound_proxy }}"
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
          securityContext:
            privileged: true
callum-p commented 2 years ago

@tomekit it doesn't look like your DaemonSet is using the --namespace-restrictions flag, as per the docs at https://github.com/jtblin/kube2iam#namespace-restrictions.
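Assuming that is the cause, the change would be adding the flag to the container args in the DaemonSet above (a sketch, untested against this cluster):

args:
  - "--app-port=8181"
  - "--iptables=true"
  - "--host-ip=$(HOST_IP)"
  - "--host-interface=cali+"
  # deny roles unless the namespace's iam.amazonaws.com/allowed-roles
  # annotation permits them
  - "--namespace-restrictions=true"
  - "--verbose"

With this enabled, each namespace (including default) needs the allowed-roles annotation for its pods to assume any role.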