cyberark / KubiScan

A tool to scan Kubernetes cluster for risky permissions
GNU General Public License v3.0

Listing secret not capturing as a risky rule #10

Closed prasenforu closed 3 years ago

prasenforu commented 5 years ago

My RBAC (ServiceAccount, Role & RoleBinding) is as follows; the Role allows listing secrets.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: listsecrets
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-list-secrets
rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-list-secrets
subjects:
- kind: ServiceAccount
  name: listsecrets
  namespace: testing
roleRef:
  kind: Role
  name: role-list-secrets
  apiGroup: rbac.authorization.k8s.io

(screenshot)

But kubiscan -rr does not capture/show it as a risky rule.

(screenshot)

I'm not sure what the criteria for a risky rule are?
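For context on how a tool like KubiScan decides this: it ships a list of predefined risky rules and flags a role when the role's RBAC rules cover one of them. A minimal sketch of that idea in Python (the function and data layout are illustrative, not KubiScan's actual API):

```python
def is_risky(role_rules, risky_rule):
    """Flag a role whose rules cover a predefined risky rule.

    A rule covers the risky rule when every risky verb and resource
    is granted; a "*" wildcard grants everything.
    """
    def covers(granted, needed):
        return "*" in granted or set(needed) <= set(granted)

    return any(
        covers(rule["verbs"], risky_rule["verbs"])
        and covers(rule["resources"], risky_rule["resources"])
        for rule in role_rules
    )

# The role from this issue: "list" on "secrets" should be flagged.
role = [{"apiGroups": ["*"], "resources": ["secrets"], "verbs": ["list"]}]
risky = {"resources": ["secrets"], "verbs": ["list"]}
print(is_risky(role, risky))  # True
```

Under this sketch, the role above should clearly be caught, which is why the behavior in the screenshots looks like a bug rather than intended filtering.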

g3rzi commented 5 years ago

Interesting, I will try to reproduce it on my machine and update you. It's weird, because it's supposed to find it.

But I have two thoughts:

  1. Maybe it is related to the context. When you run "kubectl get roles", can you see the role you created?
  2. You didn't specify a namespace in the YAML file, so I assume it will be automatically assigned to the default namespace, but maybe KubiScan doesn't see the namespace and can't do the correct comparison.
prasenforu commented 5 years ago

I am using OpenShift, where I can set my namespace (oc project <namespace>).

Q2 answer: so there is no need to specify the namespace; it will take the namespace automatically. But it will NOT use the default namespace.

Q1 answer:

[root@ocpdns ocpscan]# oc get roles -n testing
NAME
role-list-secrets
prasenforu commented 5 years ago

Because it is not captured as a risky rule, the associated service account mapped to a pod is also not shown as a risky pod.

apiVersion: v1
kind: Pod
metadata:
  name: pod-wo-sa
spec:
  automountServiceAccountToken: false
  serviceAccount: listsecrets
  containers:
    - name: pod-wo-sa
      image: bkimminich/juice-shop
      ports:
        - containerPort: 3000

(screenshot)

g3rzi commented 5 years ago

I understand. I haven't tested KubiScan on OpenShift, which is why I didn't encounter this. Can you show me the list of contexts you have in OpenShift, using "oc config get-contexts"? I want to see the current context being used and the admin contexts.

prasenforu commented 5 years ago
CURRENT   NAME                                          CLUSTER            AUTHINFO                               NAMESPACE
          default/10-138-0-16:8443/admin                10-138-0-16:8443   admin/10-138-0-16:8443                 default
          default/cluster-1/admin                       cluster-1          admin/cluster-1                        default
          event-controller/10-138-0-16:8443/admin       10-138-0-16:8443   admin/10-138-0-16:8443                 event-controller
          heptio-ark/10-138-0-16:8443/admin             10-138-0-16:8443   admin/10-138-0-16:8443                 heptio-ark
          heptio-ark/10-138-0-17:8443/admin             10-138-0-17:8443   admin/10-138-0-17:8443                 heptio-ark
          kubewatch-kubewatch-10-138-0-16:8443          10-138-0-16:8443   kubewatch-kubewatch-10-138-0-16:8443   kubewatch
          kubewatch/10-138-0-16:8443/admin              10-138-0-16:8443   admin/10-138-0-16:8443                 kubewatch
          logging/10-138-0-16:8443/admin                10-138-0-16:8443   admin/10-138-0-16:8443                 logging
          loki/10-138-0-16:8443/admin                   10-138-0-16:8443   admin/10-138-0-16:8443                 loki
          loki/10-138-0-17:8443/admin                   10-138-0-17:8443   admin/10-138-0-17:8443                 loki
          ocp-view/10-138-0-16:8443/admin               10-138-0-16:8443   admin/10-138-0-16:8443                 ocp-view
          ocp-view/10-138-0-17:8443/admin               10-138-0-17:8443   admin/10-138-0-17:8443                 ocp-view
          ocpwatch/10-138-0-16:8443/admin               10-138-0-16:8443   admin/10-138-0-16:8443                 ocpwatch
          openshift-logging/10-138-0-16:8443/admin      10-138-0-16:8443   admin/10-138-0-16:8443                 openshift-logging
          openshift-monitoring/10-138-0-16:8443/admin   10-138-0-16:8443   admin/10-138-0-16:8443                 openshift-monitoring
          sample-app/10-138-0-16:8443/admin             10-138-0-16:8443   admin/10-138-0-16:8443                 sample-app
          sample-app/10-138-0-16:8443/pkar              10-138-0-16:8443   pkar/10-138-0-16:8443                  sample-app
          sample-app/10-138-0-17:8443/admin             10-138-0-17:8443   admin/10-138-0-17:8443                 sample-app
*         security/10-138-0-16:8443/admin               10-138-0-16:8443   admin/10-138-0-16:8443                 security
          security/10-138-0-16:8443/pkar                10-138-0-16:8443   pkar/10-138-0-16:8443                  security
          testing/10-138-0-16:8443/admin                10-138-0-16:8443   admin/10-138-0-16:8443                 testing
          testing/10-138-0-16:8443/pkar                 10-138-0-16:8443   pkar/10-138-0-16:8443                  testing
prasenforu commented 5 years ago

Please check the code; it looks like it's working with ClusterRole.

prasenforu commented 5 years ago

Any update?

g3rzi commented 5 years ago

Please check now. I noticed an issue with the indentation in the risky-roles YAML file, which prevented one of the risky roles from loading.

g3rzi commented 5 years ago

By the way, can you add me on Twitter (@g3rzi)? I would like to consult with you on other stuff related to Kubernetes.

prasenforu commented 5 years ago

OK, will add.

But I don't think the issue was in your YAML; you fixed it in your code.

Now with -rr it shows the roles, but it is NOT captured with -rp (risky pods).

g3rzi commented 5 years ago

There was an issue with the YAML that defines the roles you want KubiScan to capture. One of the roles after the list-secrets role had a wrong indent, which caused the list-secrets entry to be ignored when the YAML was loaded.

I will try to reproduce the -rp issue and then fix it.
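For readers hitting the same thing: a single bad indent in a YAML list can invalidate or mis-nest a neighboring entry, so the intended rule never makes it into the loaded list. An illustrative before/after (these keys are made up for the example, not KubiScan's real risky_roles.yaml schema):

```yaml
# Intended: two sibling keys in one rule entry
- verbs: ["list"]
  resources: ["secrets"]

# Broken: "resources" no longer lines up with "verbs", so the loader
# either rejects the file or mis-nests the key, and the entry is
# effectively lost when the file is parsed.
- verbs: ["list"]
   resources: ["secrets"]
```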

prasenforu commented 5 years ago

If you look at my RBAC YAML at the top of the discussion, there were no issues with it.

True that your YAML was wrong.

g3rzi commented 5 years ago

I wasn't talking about your RBAC YAML. I was talking about the risky_roles.yaml that KubiScan uses to check for risky permissions. It had an indentation problem.

prasenforu commented 5 years ago

Oh! Sorry.

I am very sorry.

g3rzi commented 5 years ago

It's OK, don't worry :)

prasenforu commented 5 years ago

Let me know once it's fixed so I can close this; it's a long discussion. :)

g3rzi commented 5 years ago

I tried to reproduce it with this YAML:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: testing
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: listsecrets
  namespace: testing
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-list-secrets
rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  verbs: ["list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-list-secrets
subjects:
- kind: ServiceAccount
  name: listsecrets
  namespace: testing
roleRef:
  kind: Role
  name: role-list-secrets
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine3
  namespace: testing
spec:
  containers:
  - name: alpine3
    image: alpine:3
    command: ["sleep", "99d"]
  serviceAccountName: listsecrets
EOF

Notice that I created a service account named listsecrets in the testing namespace, and mounted that service account to a pod inside the testing namespace. With this YAML I can find the risky pod:
(screenshot)

When I used your YAML, I noticed that you didn't use the testing namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: listsecrets

For troubleshooting, check with -rs that the user listsecrets exists in the risky subjects.

prasenforu commented 5 years ago

I am using OpenShift, where I can set my namespace (oc project <namespace>).

So there is no need to specify the namespace; it will be taken automatically. But it will NOT use the default namespace.

prasenforu commented 5 years ago

Risky roles (-rr) shows the correct output ...

But Risky Subjects (-rs) & Risky Pods (-rp) do not capture it ..

(screenshot)

But I have another pod (ocpscan-dc-1-7rwng) running in the security namespace which uses a ClusterRole, NOT a Role; that pod IS captured by Risky Subjects (-rs) & Risky Pods (-rp).

(screenshot)

g3rzi commented 4 years ago

@prasenforu , sorry for the late response. I wasn't able to reproduce it, so I need your help here. I want you to edit the file utils.py inside the engine folder. Replace the function get_all_risky_subjects() (lines 216 - 227): https://github.com/cyberark/KubiScan/blob/35d6c0418d9e9587fec91eff59c31e6fb8466dfd/engine/utils.py#L216-L227

With this:

def get_all_risky_subjects():
    all_risky_users = []
    all_risky_rolebindings = get_all_risky_rolebinding()
    passed_users = {}
    for risky_rolebinding in all_risky_rolebindings:
        print('{0}:{1}'.format(risky_rolebinding.namespace, risky_rolebinding.name))
        for user in risky_rolebinding.subjects:
            print('\t{0}:{1}'.format(user.namespace, user.name))
            # Removing duplicated users
            if ''.join((user.kind, user.name, str(user.namespace))) not in passed_users:
                passed_users[''.join((user.kind, user.name, str(user.namespace)))] = True
                all_risky_users.append(Subject(user, risky_rolebinding.priority))

    return all_risky_users

I added two print statements. I want you to run the scan with -rs and send me the output (including the new prints). This will help us see whether role-list-secrets is included in this function.

g3rzi commented 4 years ago

@prasenforu I found the bug with the namespace on -rs and -rp. If you have a RoleBinding with a service account subject that has no namespace, Kubernetes treats the service account as if it has the namespace of the RoleBinding. I added support for this scenario and it worked for me, so I think it will solve the problem you had.
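The Kubernetes behavior described here boils down to one defaulting rule: when a RoleBinding subject of kind ServiceAccount omits its namespace, the binding's own namespace applies. A hedged sketch of the comparison fix (names are illustrative, not the actual KubiScan code):

```python
def effective_subject_namespace(subject_namespace, rolebinding_namespace):
    """A ServiceAccount subject in a RoleBinding that omits its
    namespace is treated by Kubernetes as belonging to the
    RoleBinding's namespace; defaulting to it before comparing
    subjects against pods lets the match succeed."""
    return subject_namespace or rolebinding_namespace

print(effective_subject_namespace(None, "testing"))     # testing
print(effective_subject_namespace("other", "testing"))  # other
```

With this defaulting applied, the listsecrets service account from the original (namespace-less) RoleBinding resolves to the testing namespace and can be matched to the pod.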

prasenforu commented 4 years ago

Thanks 👍 Will check from my side.

g3rzi commented 4 years ago

@prasenforu did you have time to check it?

prasenforu commented 4 years ago

No, man. Didn't get a chance to check because of Covid.

Stay safe & take care.

g3rzi commented 3 years ago

Hi @prasenforu, how are you? I hope everything is okay on your side. Did you have some time to look into this issue?

cloudcafetech commented 3 years ago

Doing good, thanks.

Sorry didn't get a chance to look.

Will do at the end of the coming week.

cloudcafetech commented 3 years ago

Checked in an old OpenShift version and it looks OK; I need to test in a new OpenShift version (unfortunately I don't have such an environment).

Will update you if I can replicate it in a new OpenShift version; I expect it will work :)

Anyway, thanks for the notification.

g3rzi commented 3 years ago

Great to hear :) I will close it for now, and if you encounter it again you can open a new ticket or re-open this one. Thanks for the update.