aquasecurity / kube-hunter

Hunt for security weaknesses in Kubernetes clusters
Apache License 2.0

kube-hunter fails to list node using list_all_k8s_cluster_nodes of module kubernetes_client.py #467

Closed RSE132 closed 2 years ago

RSE132 commented 3 years ago

kube-hunter fails to list node using list_all_k8s_cluster_nodes of module kubernetes_client.py

I tried to create a Job using the jobs.yaml file in the repo. It does create a Job and spins up a pod, but looking into the pod logs I can see that it reports an error while listing nodes:

2021-07-08 12:10:16,074 INFO kube_hunter.modules.report.collector Started hunting
2021-07-08 12:10:16,074 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2021-07-08 12:10:16,088 INFO kube_hunter.modules.discovery.kubernetes_client Attempting to use in cluster Kubernetes config
2021-07-08 12:10:16,089 INFO kube_hunter.modules.report.collector Found vulnerability "Read access to pod's service account token" in Local to Pod (kube-hunter-9zvbk)
2021-07-08 12:10:16,089 INFO kube_hunter.modules.report.collector Found vulnerability "Access to pod's secrets" in Local to Pod (kube-hunter-9zvbk)
2021-07-08 12:10:16,090 INFO kube_hunter.modules.report.collector Found vulnerability "CAP_NET_RAW Enabled" in Local to Pod (kube-hunter-9zvbk)
2021-07-08 12:10:16,157 ERROR kube_hunter.modules.discovery.kubernetes_client Failed to list nodes from Kubernetes
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/kubernetes_client.py", line 21, in list_all_k8s_cluster_nodes
    ret = client.list_node(watch=False)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16414, in list_node
    return self.list_node_with_http_info(**kwargs)  # noqa: E501
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16517, in list_node_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 192, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 264, in deserialize
    return self.__deserialize(data, response_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 641, in __deserialize_model
    instance = klass(**kwargs)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/models/v1_container_image.py", line 55, in __init__
    self.names = names
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/models/v1_container_image.py", line 80, in names
    raise ValueError("Invalid value for names, must not be None")  # noqa: E501
ValueError: Invalid value for names, must not be None

Cluster: AKS
AKS Version: v1.19.7

Expected behavior

I was expecting it to run with this cluster version of AKS. It works perfectly with AKS v1.17.13, though.
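For context on the traceback above: the Kubernetes Python client performs client-side validation while deserializing the API response, and the generated `V1ContainerImage` model refuses a node image entry whose `names` field is null, which some newer API server versions apparently return. A minimal self-contained sketch of that validation pattern (a simplified stand-in class, not the real generated model):

```python
class ContainerImage:
    """Simplified stand-in for the client's generated V1ContainerImage model."""

    def __init__(self, names=None):
        # Assignment goes through the validating setter below,
        # exactly as in the generated client models.
        self.names = names

    @property
    def names(self):
        return self._names

    @names.setter
    def names(self, names):
        # The real client raises here when the API server returns an
        # image entry with a null "names" field, aborting deserialization
        # of the entire node list.
        if names is None:
            raise ValueError("Invalid value for names, must not be None")
        self._names = names


try:
    ContainerImage(names=None)
except ValueError as exc:
    print(exc)  # prints: Invalid value for names, must not be None
```

Because the validation runs during deserialization of the whole `list_node` response, one malformed image entry is enough to make the entire call fail.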

danielsagi commented 2 years ago

Hi @RSE132, thanks for reporting! I'll take a look at this and get back to you.

shresthsuman commented 2 years ago

@RSE132 @danielsagi Same approach, same error. Cluster: 1 master and 3 worker nodes, hosted in Hetzner Cloud (v1.21.0).

danielsagi commented 2 years ago

Hi @shresthsuman and @RSE132, could either of you provide the output of a run with --log debug?

shresthsuman commented 2 years ago

Hi @danielsagi, I do not understand which command you mean, since I am running kube-hunter as a pod inside my cluster. Here is some information:

The kube-hunter-deployment.yaml file:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    spec:
      containers:
        - name: kube-hunter
          image: aquasec/kube-hunter
          command: ["kube-hunter"]
          args: ["--pod"]
      restartPolicy: Never

How I am running it: kubectl apply -f kube-hunter-deployment.yaml

Output from kubectl logs kube-hunter-94dzj:

2021-07-19 14:30:06,381 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2021-07-19 14:30:06,389 INFO kube_hunter.modules.discovery.kubernetes_client Attempting to use in cluster Kubernetes config
2021-07-19 14:30:06,392 INFO kube_hunter.modules.report.collector Found vulnerability "Read access to pod's service account token" in Local to Pod (kube-hunter-94dzj)
2021-07-19 14:30:06,393 INFO kube_hunter.modules.report.collector Found vulnerability "Access to pod's secrets" in Local to Pod (kube-hunter-94dzj)
2021-07-19 14:30:06,397 INFO kube_hunter.modules.report.collector Found vulnerability "CAP_NET_RAW Enabled" in Local to Pod (kube-hunter-94dzj)
2021-07-19 14:30:06,426 ERROR kube_hunter.modules.discovery.kubernetes_client Failed to list nodes from Kubernetes
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/kubernetes_client.py", line 21, in list_all_k8s_cluster_nodes
    ret = client.list_node(watch=False)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16414, in list_node
    return self.list_node_with_http_info(**kwargs)  # noqa: E501
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16517, in list_node_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 239, in GET
    return self.request("GET", url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 233, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '7ebda5e1-ae4f-41e7-a9ac-7f06f08a5813', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'de84989f-8afe-4407-bd8b-512f52326091', 'Date': 'Mon, 19 Jul 2021 14:30:06 GMT', 'Content-Length': '277'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"nodes"},"code":403}

Vulnerabilities
For further information about a vulnerability, search its ID in: 
https://avd.aquasec.com/
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| ID     | LOCATION             | CATEGORY    | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| None   | Local to Pod (kube-  | Access Risk | CAP_NET_RAW Enabled  | CAP_NET_RAW is       |                      |
|        | hunter-94dzj)        |             |                      | enabled by default   |                      |
|        |                      |             |                      | for pods.            |                      |
|        |                      |             |                      |     If an attacker   |                      |
|        |                      |             |                      | manages to           |                      |
|        |                      |             |                      | compromise a pod,    |                      |
|        |                      |             |                      |     they could       |                      |
|        |                      |             |                      | potentially take     |                      |
|        |                      |             |                      | advantage of this    |                      |
|        |                      |             |                      | capability to        |                      |
|        |                      |             |                      | perform network      |                      |
|        |                      |             |                      |     attacks on other |                      |
|        |                      |             |                      | pods running on the  |                      |
|        |                      |             |                      | same node            |                      |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| None   | Local to Pod (kube-  | Access Risk | Access to pod's      | Accessing the pod's  | ['/var/run/secrets/k |
|        | hunter-94dzj)        |             | secrets              | secrets within a     | ubernetes.io/service |
|        |                      |             |                      | compromised pod      | account/token', '/va |
|        |                      |             |                      | might disclose       | r/run/secrets/kubern |
|        |                      |             |                      | valuable data to a   | etes.io/serviceaccou |
|        |                      |             |                      | potential attacker   | ...                  |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| KHV050 | Local to Pod (kube-  | Access Risk | Read access to pod's | Accessing the pod    | eyJhbGciOiJSUzI1NiIs |
|        | hunter-94dzj)        |             | service account      | service account      | ImtpZCI6IldCSHhfVXY4 |
|        |                      |             | token                | token gives an       | WWNEeWd0Z0Z2N2VjSjFJ |
|        |                      |             |                      | attacker the option  | VG8zY1RWZlhna1FWdWlS |
|        |                      |             |                      | to use the server    | czJmMmcifQ.eyJhdWQiO |
|        |                      |             |                      | API                  | ...                  |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+

Kube Hunter couldn't find any clusters
$ cat kube-hunter-deployment.yaml 
---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    spec:
      containers:
        - name: kube-hunter
          image: aquasec/kube-hunter
          command: ["kube-hunter"]
          args: ["--pod"]
      restartPolicy: Never
  backoffLimit: 4

$ kubectl describe job kube-hunter
Name:           kube-hunter
Namespace:      default
Selector:       controller-uid=8370b6bc-ec27-4645-8273-7bfda5e0c00f
Labels:         controller-uid=8370b6bc-ec27-4645-8273-7bfda5e0c00f
                job-name=kube-hunter
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Mon, 19 Jul 2021 16:30:01 +0200
Completed At:   Mon, 19 Jul 2021 16:30:07 +0200
Duration:       6s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=8370b6bc-ec27-4645-8273-7bfda5e0c00f
           job-name=kube-hunter
  Containers:
   kube-hunter:
    Image:      aquasec/kube-hunter
    Port:       <none>
    Host Port:  <none>
    Command:
      kube-hunter
    Args:
      --pod
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>
$ kubectl describe pod kube-hunter-94dzj 
Name:         kube-hunter-94dzj
Namespace:    default
Priority:     0
Node:         worker3/xx.xx.xx.xx
Start Time:   Mon, 19 Jul 2021 16:30:01 +0200
Labels:       controller-uid=8370b6bc-ec27-4645-8273-7bfda5e0c00f
              job-name=kube-hunter
Annotations:  <none>
Status:       Succeeded
IP:           192.168.2.130
IPs:
  IP:           192.168.2.130
Controlled By:  Job/kube-hunter
Containers:
  kube-hunter:
    Container ID:  docker://29e6db564406a8301b5eacece780125e6c518a99628b3551d201c21ad66823b5
    Image:         aquasec/kube-hunter
    Image ID:      docker-pullable://aquasec/kube-hunter@sha256:81c9fd2ba98e6a9da3bf2a969fa1f49a1d06b23db5e50a6c7f1660a506f04d37
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-hunter
    Args:
      --pod
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 19 Jul 2021 16:30:04 +0200
      Finished:     Mon, 19 Jul 2021 16:30:06 +0200
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4jx2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-j4jx2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
$ kubectl get nodes -o wide
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master1   Ready    control-plane,master   72d   v1.21.0   xxx.xxx.xxx.xxx   <none>        Ubuntu 20.04.2 LTS   5.4.0-72-generic   docker://20.10.6
worker1   Ready    <none>                 72d   v1.21.0   xxx.xxx.xxx.xxx   <none>        Ubuntu 20.04.2 LTS   5.4.0-72-generic   docker://20.10.6
worker2   Ready    <none>                 72d   v1.21.0   xxx.xxx.xxx.xxx    <none>        Ubuntu 20.04.2 LTS   5.4.0-72-generic   docker://20.10.6
worker3   Ready    <none>                 72d   v1.21.0   xxx.xxx.xxx.xxx     <none>        Ubuntu 20.04.2 LTS   5.4.0-72-generic   docker://20.10.6
danielsagi commented 2 years ago

@shresthsuman Sorry for not clarifying. I meant: add the log flags to the yaml. Change:

args: ["--pod"]

To:

args: ["--pod", "--log", "debug"]

From what I'm seeing now, the behavior is mostly normal. The default service account that kube-hunter gets deployed with does not have permission to list nodes, so the ERROR is raised. (This is considered a bug; we should only display a warning for this.) I also don't think this is the same problem @RSE132 is having, so I would like a debug output from you too (@RSE132).
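The "warning instead of ERROR" behavior described here can be sketched as follows. This is a standalone illustration, not kube-hunter's actual code: the `ApiException` stand-in class and the `list_node` callable are hypothetical simplifications of the real kubernetes client types.

```python
import logging

logger = logging.getLogger("kubernetes_client")


class ApiException(Exception):
    """Stand-in for kubernetes.client.exceptions.ApiException."""

    def __init__(self, status):
        self.status = status
        super().__init__(f"({status})")


def list_all_k8s_cluster_nodes(list_node):
    """Yield cluster nodes, downgrading an RBAC denial (HTTP 403)
    to a warning instead of logging a full ERROR traceback."""
    try:
        for node in list_node():
            yield node
    except ApiException as exc:
        if exc.status == 403:
            # Expected when running with the default service account:
            # it cannot list nodes, and kube-hunter works fine without it.
            logger.warning("Not allowed to list nodes from Kubernetes: %s", exc)
        else:
            logger.error("Failed to list nodes from Kubernetes: %s", exc)


def forbidden_list_node():
    """Simulates the API server rejecting the list with a 403."""
    raise ApiException(status=403)
    yield  # unreachable; makes this function a generator


print(list(list_all_k8s_cluster_nodes(forbidden_list_node)))  # prints []
```

With this shape, a permission denial simply yields no nodes and the hunt continues, which matches the point that kube-hunter does not depend on the node-list permission.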

However, this is just a new feature we added; kube-hunter does not rely on this "list nodes" permission. So if you can, please attach debug logs as previously explained, so I can help you figure out whether there is a bug in the normal node discovery.
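For anyone who nevertheless wants the node listing to succeed, granting the pod's service account the missing permission would look roughly like this. This is a hypothetical sketch, not part of the kube-hunter repo; the ClusterRole name is made up, and it assumes the Job runs as the default service account in the default namespace, as in the logs above:

```
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-hunter-node-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-hunter-node-reader   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-hunter-node-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```

A ClusterRole/ClusterRoleBinding (rather than a namespaced Role) is needed because nodes are cluster-scoped resources, matching the "at the cluster scope" wording in the 403 response body.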

shresthsuman commented 2 years ago

Hi @danielsagi

Here are the logs with the flags added to the arguments:

$ kubectl logs kube-hunter-4vcx5
2021-07-22 16:06:31,971 DEBUG root <class 'kube_hunter.modules.report.collector.Collector'> subscribed to <class 'kube_hunter.core.events.types.Vulnerability'>
2021-07-22 16:06:31,971 DEBUG root <class 'kube_hunter.modules.report.collector.Collector'> subscribed to <class 'kube_hunter.core.events.types.Service'>
2021-07-22 16:06:31,972 DEBUG root <class 'kube_hunter.modules.report.collector.SendFullReport'> subscribed to <class 'kube_hunter.core.events.types.HuntFinished'>
2021-07-22 16:06:31,972 DEBUG root <class 'kube_hunter.modules.report.collector.StartedInfo'> subscribed to <class 'kube_hunter.core.events.types.HuntStarted'>
2021-07-22 16:06:32,034 DEBUG root <class 'kube_hunter.modules.discovery.apiserver.ApiServiceDiscovery'> subscribed to <class 'kube_hunter.core.events.types.OpenPortEvent'>
2021-07-22 16:06:32,034 DEBUG root <class 'kube_hunter.modules.discovery.apiserver.ApiServiceClassify'> filter subscribed to <class 'kube_hunter.modules.discovery.apiserver.K8sApiService'>
2021-07-22 16:06:32,035 DEBUG root <class 'kube_hunter.modules.discovery.dashboard.KubeDashboard'> subscribed to <class 'kube_hunter.core.events.types.OpenPortEvent'>
2021-07-22 16:06:32,035 DEBUG root <class 'kube_hunter.modules.discovery.etcd.EtcdRemoteAccess'> subscribed to <class 'kube_hunter.core.events.types.OpenPortEvent'>
2021-07-22 16:06:32,720 DEBUG root <class 'kube_hunter.modules.discovery.hosts.FromPodHostDiscovery'> subscribed to <class 'kube_hunter.modules.discovery.hosts.RunningAsPodEvent'>
2021-07-22 16:06:32,721 DEBUG root <class 'kube_hunter.modules.discovery.hosts.HostDiscovery'> subscribed to <class 'kube_hunter.modules.discovery.hosts.HostScanEvent'>
2021-07-22 16:06:32,722 DEBUG root <class 'kube_hunter.modules.discovery.kubectl.KubectlClientDiscovery'> subscribed to <class 'kube_hunter.core.events.types.HuntStarted'>
2021-07-22 16:06:32,723 DEBUG root <class 'kube_hunter.modules.discovery.kubelet.KubeletDiscovery'> subscribed to <class 'kube_hunter.core.events.types.OpenPortEvent'>
2021-07-22 16:06:32,723 DEBUG root <class 'kube_hunter.modules.discovery.ports.PortDiscovery'> subscribed to <class 'kube_hunter.core.events.types.NewHostEvent'>
2021-07-22 16:06:32,724 DEBUG root <class 'kube_hunter.modules.discovery.proxy.KubeProxy'> subscribed to <class 'kube_hunter.core.events.types.OpenPortEvent'>
2021-07-22 16:06:32,734 DEBUG root <class 'kube_hunter.modules.hunting.kubelet.ReadOnlyKubeletPortHunter'> subscribed to <class 'kube_hunter.modules.discovery.kubelet.ReadOnlyKubeletEvent'>
2021-07-22 16:06:32,735 DEBUG root <class 'kube_hunter.modules.hunting.kubelet.SecureKubeletPortHunter'> subscribed to <class 'kube_hunter.modules.discovery.kubelet.SecureKubeletEvent'>
2021-07-22 16:06:32,735 DEBUG root <class 'kube_hunter.modules.hunting.aks.AzureSpnHunter'> subscribed to <class 'kube_hunter.modules.hunting.kubelet.ExposedPodsHandler'>
2021-07-22 16:06:32,736 DEBUG root <class 'kube_hunter.modules.hunting.apiserver.AccessApiServer'> subscribed to <class 'kube_hunter.modules.discovery.apiserver.ApiServer'>
2021-07-22 16:06:32,736 DEBUG root <class 'kube_hunter.modules.hunting.apiserver.AccessApiServerWithToken'> subscribed to <class 'kube_hunter.modules.discovery.apiserver.ApiServer'>
2021-07-22 16:06:32,737 DEBUG root <class 'kube_hunter.modules.hunting.apiserver.ApiVersionHunter'> subscribed to <class 'kube_hunter.modules.discovery.apiserver.ApiServer'>
2021-07-22 16:06:33,100 DEBUG root <class 'kube_hunter.modules.hunting.capabilities.PodCapabilitiesHunter'> subscribed to <class 'kube_hunter.modules.discovery.hosts.RunningAsPodEvent'>
2021-07-22 16:06:33,103 DEBUG root <class 'kube_hunter.modules.hunting.certificates.CertificateDiscovery'> subscribed to <class 'kube_hunter.core.events.types.Service'>
2021-07-22 16:06:33,110 DEBUG root <class 'kube_hunter.modules.hunting.cves.K8sClusterCveHunter'> subscribed to <class 'kube_hunter.core.events.types.K8sVersionDisclosure'>
2021-07-22 16:06:33,110 DEBUG root <class 'kube_hunter.modules.hunting.cves.KubectlCVEHunter'> subscribed to <class 'kube_hunter.modules.discovery.kubectl.KubectlClientEvent'>
2021-07-22 16:06:33,111 DEBUG root <class 'kube_hunter.modules.hunting.dashboard.KubeDashboard'> subscribed to <class 'kube_hunter.modules.discovery.dashboard.KubeDashboardEvent'>
2021-07-22 16:06:33,112 DEBUG root <class 'kube_hunter.modules.hunting.etcd.EtcdRemoteAccess'> subscribed to <class 'kube_hunter.core.events.types.OpenPortEvent'>
2021-07-22 16:06:33,112 DEBUG root <class 'kube_hunter.modules.hunting.mounts.VarLogMountHunter'> subscribed to <class 'kube_hunter.modules.hunting.kubelet.ExposedPodsHandler'>
2021-07-22 16:06:33,113 DEBUG root <class 'kube_hunter.modules.hunting.proxy.KubeProxy'> subscribed to <class 'kube_hunter.modules.discovery.proxy.KubeProxyEvent'>
2021-07-22 16:06:33,113 DEBUG root <class 'kube_hunter.modules.hunting.secrets.AccessSecrets'> subscribed to <class 'kube_hunter.modules.discovery.hosts.RunningAsPodEvent'>
2021-07-22 16:06:33,114 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.core.events.types.HuntStarted'> got published to hunter - <class 'kube_hunter.modules.report.collector.StartedInfo'> with <kube_hunter.core.events.types.HuntStarted object at 0x7f68a9de8df0>
2021-07-22 16:06:33,114 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.core.events.types.HuntStarted'> got published to hunter - <class 'kube_hunter.modules.discovery.kubectl.KubectlClientDiscovery'> with <kube_hunter.core.events.types.HuntStarted object at 0x7f68a9de8df0>
2021-07-22 16:06:33,114 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.report.collector.StartedInfo'> with {'previous': None, 'hunter': None}
2021-07-22 16:06:33,114 INFO kube_hunter.modules.report.collector Started hunting
2021-07-22 16:06:33,114 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2021-07-22 16:06:33,115 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.discovery.kubectl.KubectlClientDiscovery'> with {'previous': None, 'hunter': None}
2021-07-22 16:06:33,115 DEBUG kube_hunter.modules.discovery.kubectl Attempting to discover a local kubectl client
2021-07-22 16:06:33,126 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.modules.discovery.hosts.RunningAsPodEvent'> got published to hunter - <class 'kube_hunter.modules.discovery.hosts.FromPodHostDiscovery'> with <kube_hunter.modules.discovery.hosts.RunningAsPodEvent object at 0x7f68a99b2e50>
2021-07-22 16:06:33,129 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.discovery.hosts.FromPodHostDiscovery'> with {'name': 'Running from within a pod', 'client_cert': '-----BEGIN CERTIFICATE-----\nMIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\ncm5ldGVzMB4XDTIxMDUxMDExMzMzNVoXDTMxMDUwODExMzMzNVowFTETMBEGA1UE\nAxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANRm\nGen916cgdX5DQEh/hNyPQKaYqqZ5QjJi1y0DcEKDt2Zr18ghPNFL1SKdbR+eB8XA\n6OZTGX/J5Fc6padoLlXA2P4hfJyUf5pTY4qNLS6ZfCHHyu9Ve46ex8rM7F+wituG\nqV1H31Z26NI2xIiJ29fHtNgaQ5qoL/xgYtS0P9etV1K8QXy448uFA6Tkjarl0PE9\n5hXpSOT9K6hz5ADBcD5EDvRfnZG9RvenbQh+oDXpA163wZVK6pSI1bvDkbGfUkjK\nMQ4x8HH4qI5m4YDitGbmrQqwO++2mz+HV3loqc1CEEuzLb9GOblc4ZAV05bA7cub\no0O7izaEIRSIc4BOGp0CAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB\n/wQFMAMBAf8wHQYDVR0OBBYEFJQLxOsQj36XIWTPq9AAvmfB5U99MA0GCSqGSIb3\nDQEBCwUAA4IBAQCcD9qRmOj8QlU5HgIVe3MfxdjzbdE9sDmId6ZyPkbGM2YX6xGU\nCWHKlIqTiAq1ZAKGgdhlTLcINVb6sp2jYzlrEU8zf4MYjlpUL+JA17fU1zpCglgC\nTlBNjKhtujFtmAaV7gXB3h8lzGcGp811cWoeKoV4BhP+mUiz0Xt/DaeWXdXNgByX\nUncVyQcXuFtkjNq5/5PptkqEm5wUa7pqfBuGx54bUO/1r6cmJTE/NGrEv/O0y3cg\nYc+ZncT9nGVHc4AGXIwvjWSfSZ28G3DqipQFaNd+6lcBwEBeK7aX9DEijOfMvFSK\nGRUMJeWJTGsLiCHBkHs/64A7/S/5hCioOfOt\n-----END CERTIFICATE-----\n', 'namespace': 'default', 'kubeservicehost': '10.96.0.1', 'auth_token': 
'eyJhbGciOiJSUzI1NiIsImtpZCI6IldCSHhfVXY4WWNEeWd0Z0Z2N2VjSjFJVG8zY1RWZlhna1FWdWlSczJmMmcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU4NTA1OTgyLCJpYXQiOjE2MjY5Njk5ODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJrdWJlLWh1bnRlci00dmN4NSIsInVpZCI6ImFlYTU4MGRhLWYyY2UtNDk2Ny05NzkxLWNjMjhmZjIzNjYyNyJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjYzYWE1NDQ1LWJjYzUtNDBjNC05ZGRiLTljZTdiNGZiNDA2NSJ9LCJ3YXJuYWZ0ZXIiOjE2MjY5NzM1ODl9LCJuYmYiOjE2MjY5Njk5ODIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.sHv3AV7JBiRLCfA8iTMJABdKPBjrN3CUebaIbBZhp5e4575asGdCDEJ7iXxomDukfvJpUw3zt0bWoo3Oa0HNkwqgSn60cbQNrUQ0nvN8LyY5Y_vYuq1UF3prtGXfC8KnWsElpZtSaZxgx4Nqr76tFRXezTRalUbTWyQ81MRmJ0WWE4wPtLQlJROkPFpYikuN5NvvOUNkDq98JhX0-_XjIAiayEKb8kNNETXLuTaCTFAFef53_wUNt4CBbCncrPWeOmOjZgYy2SDlqzTNKiK-n5gT1g_kxpbmgU1kN9A4jGFvgzALGOQGHs1iZKLoH6BV0og9L-WTThjmc4Hkb1JXZw'}
2021-07-22 16:06:33,131 DEBUG kube_hunter.modules.discovery.kubernetes_client Attempting to use in cluster Kubernetes config
2021-07-22 16:06:33,131 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.hunting.capabilities.PodCapabilitiesHunter'> with {'name': 'Running from within a pod', 'client_cert': '-----BEGIN CERTIFICATE-----\nMIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\ncm5ldGVzMB4XDTIxMDUxMDExMzMzNVoXDTMxMDUwODExMzMzNVowFTETMBEGA1UE\nAxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANRm\nGen916cgdX5DQEh/hNyPQKaYqqZ5QjJi1y0DcEKDt2Zr18ghPNFL1SKdbR+eB8XA\n6OZTGX/J5Fc6padoLlXA2P4hfJyUf5pTY4qNLS6ZfCHHyu9Ve46ex8rM7F+wituG\nqV1H31Z26NI2xIiJ29fHtNgaQ5qoL/xgYtS0P9etV1K8QXy448uFA6Tkjarl0PE9\n5hXpSOT9K6hz5ADBcD5EDvRfnZG9RvenbQh+oDXpA163wZVK6pSI1bvDkbGfUkjK\nMQ4x8HH4qI5m4YDitGbmrQqwO++2mz+HV3loqc1CEEuzLb9GOblc4ZAV05bA7cub\no0O7izaEIRSIc4BOGp0CAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB\n/wQFMAMBAf8wHQYDVR0OBBYEFJQLxOsQj36XIWTPq9AAvmfB5U99MA0GCSqGSIb3\nDQEBCwUAA4IBAQCcD9qRmOj8QlU5HgIVe3MfxdjzbdE9sDmId6ZyPkbGM2YX6xGU\nCWHKlIqTiAq1ZAKGgdhlTLcINVb6sp2jYzlrEU8zf4MYjlpUL+JA17fU1zpCglgC\nTlBNjKhtujFtmAaV7gXB3h8lzGcGp811cWoeKoV4BhP+mUiz0Xt/DaeWXdXNgByX\nUncVyQcXuFtkjNq5/5PptkqEm5wUa7pqfBuGx54bUO/1r6cmJTE/NGrEv/O0y3cg\nYc+ZncT9nGVHc4AGXIwvjWSfSZ28G3DqipQFaNd+6lcBwEBeK7aX9DEijOfMvFSK\nGRUMJeWJTGsLiCHBkHs/64A7/S/5hCioOfOt\n-----END CERTIFICATE-----\n', 'namespace': 'default', 'kubeservicehost': '10.96.0.1', 'auth_token': 
'eyJhbGciOiJSUzI1NiIsImtpZCI6IldCSHhfVXY4WWNEeWd0Z0Z2N2VjSjFJVG8zY1RWZlhna1FWdWlSczJmMmcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU4NTA1OTgyLCJpYXQiOjE2MjY5Njk5ODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJrdWJlLWh1bnRlci00dmN4NSIsInVpZCI6ImFlYTU4MGRhLWYyY2UtNDk2Ny05NzkxLWNjMjhmZjIzNjYyNyJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjYzYWE1NDQ1LWJjYzUtNDBjNC05ZGRiLTljZTdiNGZiNDA2NSJ9LCJ3YXJuYWZ0ZXIiOjE2MjY5NzM1ODl9LCJuYmYiOjE2MjY5Njk5ODIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.sHv3AV7JBiRLCfA8iTMJABdKPBjrN3CUebaIbBZhp5e4575asGdCDEJ7iXxomDukfvJpUw3zt0bWoo3Oa0HNkwqgSn60cbQNrUQ0nvN8LyY5Y_vYuq1UF3prtGXfC8KnWsElpZtSaZxgx4Nqr76tFRXezTRalUbTWyQ81MRmJ0WWE4wPtLQlJROkPFpYikuN5NvvOUNkDq98JhX0-_XjIAiayEKb8kNNETXLuTaCTFAFef53_wUNt4CBbCncrPWeOmOjZgYy2SDlqzTNKiK-n5gT1g_kxpbmgU1kN9A4jGFvgzALGOQGHs1iZKLoH6BV0og9L-WTThjmc4Hkb1JXZw'}
2021-07-22 16:06:33,140 DEBUG kube_hunter.modules.hunting.capabilities Passive hunter's trying to open a RAW socket
2021-07-22 16:06:33,140 DEBUG kube_hunter.modules.hunting.capabilities Passive hunter's closing RAW socket
2021-07-22 16:06:33,141 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.modules.hunting.capabilities.CapNetRawEnabled'> got published to hunter - <class 'kube_hunter.modules.report.collector.Collector'> with <kube_hunter.modules.hunting.capabilities.CapNetRawEnabled object at 0x7f68a8a8ac10>
2021-07-22 16:06:33,141 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.report.collector.Collector'> with {'vid': 'None', 'component': <class 'kube_hunter.core.types.KubernetesCluster'>, 'category': <class 'kube_hunter.core.types.AccessRisk'>, 'name': 'CAP_NET_RAW Enabled', 'evidence': '', 'role': 'Node', 'previous': <kube_hunter.modules.discovery.hosts.RunningAsPodEvent object at 0x7f68a99b2e50>, 'hunter': <class 'kube_hunter.modules.hunting.capabilities.PodCapabilitiesHunter'>}
2021-07-22 16:06:33,142 INFO kube_hunter.modules.report.collector Found vulnerability "CAP_NET_RAW Enabled" in Local to Pod (kube-hunter-4vcx5)
2021-07-22 16:06:33,130 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.modules.discovery.hosts.RunningAsPodEvent'> got published to hunter - <class 'kube_hunter.modules.hunting.capabilities.PodCapabilitiesHunter'> with <kube_hunter.modules.discovery.hosts.RunningAsPodEvent object at 0x7f68a99b2e50>
2021-07-22 16:06:33,142 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.modules.discovery.hosts.RunningAsPodEvent'> got published to hunter - <class 'kube_hunter.modules.hunting.secrets.AccessSecrets'> with <kube_hunter.modules.discovery.hosts.RunningAsPodEvent object at 0x7f68a99b2e50>
2021-07-22 16:06:33,142 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.hunting.secrets.AccessSecrets'> with {'name': 'Running from within a pod', 'client_cert': '-----BEGIN CERTIFICATE-----\nMIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\ncm5ldGVzMB4XDTIxMDUxMDExMzMzNVoXDTMxMDUwODExMzMzNVowFTETMBEGA1UE\nAxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANRm\nGen916cgdX5DQEh/hNyPQKaYqqZ5QjJi1y0DcEKDt2Zr18ghPNFL1SKdbR+eB8XA\n6OZTGX/J5Fc6padoLlXA2P4hfJyUf5pTY4qNLS6ZfCHHyu9Ve46ex8rM7F+wituG\nqV1H31Z26NI2xIiJ29fHtNgaQ5qoL/xgYtS0P9etV1K8QXy448uFA6Tkjarl0PE9\n5hXpSOT9K6hz5ADBcD5EDvRfnZG9RvenbQh+oDXpA163wZVK6pSI1bvDkbGfUkjK\nMQ4x8HH4qI5m4YDitGbmrQqwO++2mz+HV3loqc1CEEuzLb9GOblc4ZAV05bA7cub\no0O7izaEIRSIc4BOGp0CAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB\n/wQFMAMBAf8wHQYDVR0OBBYEFJQLxOsQj36XIWTPq9AAvmfB5U99MA0GCSqGSIb3\nDQEBCwUAA4IBAQCcD9qRmOj8QlU5HgIVe3MfxdjzbdE9sDmId6ZyPkbGM2YX6xGU\nCWHKlIqTiAq1ZAKGgdhlTLcINVb6sp2jYzlrEU8zf4MYjlpUL+JA17fU1zpCglgC\nTlBNjKhtujFtmAaV7gXB3h8lzGcGp811cWoeKoV4BhP+mUiz0Xt/DaeWXdXNgByX\nUncVyQcXuFtkjNq5/5PptkqEm5wUa7pqfBuGx54bUO/1r6cmJTE/NGrEv/O0y3cg\nYc+ZncT9nGVHc4AGXIwvjWSfSZ28G3DqipQFaNd+6lcBwEBeK7aX9DEijOfMvFSK\nGRUMJeWJTGsLiCHBkHs/64A7/S/5hCioOfOt\n-----END CERTIFICATE-----\n', 'namespace': 'default', 'kubeservicehost': '10.96.0.1', 'auth_token': 
'eyJhbGciOiJSUzI1NiIsImtpZCI6IldCSHhfVXY4WWNEeWd0Z0Z2N2VjSjFJVG8zY1RWZlhna1FWdWlSczJmMmcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU4NTA1OTgyLCJpYXQiOjE2MjY5Njk5ODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJrdWJlLWh1bnRlci00dmN4NSIsInVpZCI6ImFlYTU4MGRhLWYyY2UtNDk2Ny05NzkxLWNjMjhmZjIzNjYyNyJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjYzYWE1NDQ1LWJjYzUtNDBjNC05ZGRiLTljZTdiNGZiNDA2NSJ9LCJ3YXJuYWZ0ZXIiOjE2MjY5NzM1ODl9LCJuYmYiOjE2MjY5Njk5ODIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.sHv3AV7JBiRLCfA8iTMJABdKPBjrN3CUebaIbBZhp5e4575asGdCDEJ7iXxomDukfvJpUw3zt0bWoo3Oa0HNkwqgSn60cbQNrUQ0nvN8LyY5Y_vYuq1UF3prtGXfC8KnWsElpZtSaZxgx4Nqr76tFRXezTRalUbTWyQ81MRmJ0WWE4wPtLQlJROkPFpYikuN5NvvOUNkDq98JhX0-_XjIAiayEKb8kNNETXLuTaCTFAFef53_wUNt4CBbCncrPWeOmOjZgYy2SDlqzTNKiK-n5gT1g_kxpbmgU1kN9A4jGFvgzALGOQGHs1iZKLoH6BV0og9L-WTThjmc4Hkb1JXZw'}
2021-07-22 16:06:33,143 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.modules.hunting.secrets.ServiceAccountTokenAccess'> got published to hunter - <class 'kube_hunter.modules.report.collector.Collector'> with <kube_hunter.modules.hunting.secrets.ServiceAccountTokenAccess object at 0x7f68a8a98190>
2021-07-22 16:06:33,143 DEBUG kube_hunter.modules.hunting.secrets Trying to access pod's secrets directory
2021-07-22 16:06:33,143 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.report.collector.Collector'> with {'vid': 'KHV050', 'component': <class 'kube_hunter.core.types.KubernetesCluster'>, 'category': <class 'kube_hunter.core.types.AccessRisk'>, 'name': "Read access to pod's service account token", 'evidence': 'eyJhbGciOiJSUzI1NiIsImtpZCI6IldCSHhfVXY4WWNEeWd0Z0Z2N2VjSjFJVG8zY1RWZlhna1FWdWlSczJmMmcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU4NTA1OTgyLCJpYXQiOjE2MjY5Njk5ODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJrdWJlLWh1bnRlci00dmN4NSIsInVpZCI6ImFlYTU4MGRhLWYyY2UtNDk2Ny05NzkxLWNjMjhmZjIzNjYyNyJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjYzYWE1NDQ1LWJjYzUtNDBjNC05ZGRiLTljZTdiNGZiNDA2NSJ9LCJ3YXJuYWZ0ZXIiOjE2MjY5NzM1ODl9LCJuYmYiOjE2MjY5Njk5ODIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.sHv3AV7JBiRLCfA8iTMJABdKPBjrN3CUebaIbBZhp5e4575asGdCDEJ7iXxomDukfvJpUw3zt0bWoo3Oa0HNkwqgSn60cbQNrUQ0nvN8LyY5Y_vYuq1UF3prtGXfC8KnWsElpZtSaZxgx4Nqr76tFRXezTRalUbTWyQ81MRmJ0WWE4wPtLQlJROkPFpYikuN5NvvOUNkDq98JhX0-_XjIAiayEKb8kNNETXLuTaCTFAFef53_wUNt4CBbCncrPWeOmOjZgYy2SDlqzTNKiK-n5gT1g_kxpbmgU1kN9A4jGFvgzALGOQGHs1iZKLoH6BV0og9L-WTThjmc4Hkb1JXZw', 'role': 'Node', 'previous': <kube_hunter.modules.discovery.hosts.RunningAsPodEvent object at 0x7f68a99b2e50>, 'hunter': <class 'kube_hunter.modules.hunting.secrets.AccessSecrets'>}
2021-07-22 16:06:33,144 INFO kube_hunter.modules.report.collector Found vulnerability "Read access to pod's service account token" in Local to Pod (kube-hunter-4vcx5)
2021-07-22 16:06:33,144 DEBUG kube_hunter.modules.discovery.kubectl Could not find kubectl client
2021-07-22 16:06:33,145 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.modules.hunting.secrets.SecretsAccess'> got published to hunter - <class 'kube_hunter.modules.report.collector.Collector'> with <kube_hunter.modules.hunting.secrets.SecretsAccess object at 0x7f68a9dfee20>
2021-07-22 16:06:33,145 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.report.collector.Collector'> with {'vid': 'None', 'component': <class 'kube_hunter.core.types.KubernetesCluster'>, 'category': <class 'kube_hunter.core.types.AccessRisk'>, 'name': "Access to pod's secrets", 'evidence': ['/var/run/secrets/kubernetes.io/serviceaccount/ca.crt', '/var/run/secrets/kubernetes.io/serviceaccount/token', '/var/run/secrets/kubernetes.io/serviceaccount/namespace', '/var/run/secrets/kubernetes.io/serviceaccount/..2021_07_22_16_06_22.289284948/token', '/var/run/secrets/kubernetes.io/serviceaccount/..2021_07_22_16_06_22.289284948/namespace', '/var/run/secrets/kubernetes.io/serviceaccount/..2021_07_22_16_06_22.289284948/ca.crt'], 'role': 'Node', 'previous': <kube_hunter.modules.discovery.hosts.RunningAsPodEvent object at 0x7f68a99b2e50>, 'hunter': <class 'kube_hunter.modules.hunting.secrets.AccessSecrets'>}
2021-07-22 16:06:33,145 INFO kube_hunter.modules.report.collector Found vulnerability "Access to pod's secrets" in Local to Pod (kube-hunter-4vcx5)
2021-07-22 16:06:33,172 DEBUG kubernetes.client.rest response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"nodes"},"code":403}

2021-07-22 16:06:33,173 DEBUG kube_hunter.modules.discovery.kubernetes_client Failed to list nodes from Kubernetes: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '7ebda5e1-ae4f-41e7-a9ac-7f06f08a5813', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'de84989f-8afe-4407-bd8b-512f52326091', 'Date': 'Thu, 22 Jul 2021 16:06:33 GMT', 'Content-Length': '277'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"nodes"},"code":403}

2021-07-22 16:06:33,175 DEBUG kube_hunter.modules.discovery.hosts From pod attempting to access Azure Metadata API
2021-07-22 16:06:33,191 DEBUG kube_hunter.modules.discovery.hosts From pod attempting to access AWS Metadata v1 API
2021-07-22 16:06:33,197 DEBUG kube_hunter.modules.discovery.hosts From pod attempting to access aws's metadata v1
2021-07-22 16:06:33,203 DEBUG kube_hunter.core.events.handler list index out of range
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/core/events/handler.py", line 315, in worker
    hook.execute()
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/hosts.py", line 136, in execute
    subnets, cloud = self.aws_metadata_v1_discovery()
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/hosts.py", line 229, in aws_metadata_v1_discovery
    address, subnet = (cidr[0], cidr[1])
IndexError: list index out of range
2021-07-22 16:06:33,205 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.core.events.types.HuntFinished'> got published to hunter - <class 'kube_hunter.modules.report.collector.SendFullReport'> with <kube_hunter.core.events.types.HuntFinished object at 0x7f68a9dfe580>
2021-07-22 16:06:33,205 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.report.collector.SendFullReport'> with {'previous': None, 'hunter': None}
2021-07-22 16:06:33,208 DEBUG kube_hunter.modules.report.dispatchers Dispatching report via stdout
2021-07-22 16:06:33,208 DEBUG kube_hunter.__main__ Cleaned Queue

Vulnerabilities
For further information about a vulnerability, search its ID in: 
https://avd.aquasec.com/
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| ID     | LOCATION             | CATEGORY    | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| None   | Local to Pod (kube-  | Access Risk | CAP_NET_RAW Enabled  | CAP_NET_RAW is       |                      |
|        | hunter-4vcx5)        |             |                      | enabled by default   |                      |
|        |                      |             |                      | for pods.            |                      |
|        |                      |             |                      |     If an attacker   |                      |
|        |                      |             |                      | manages to           |                      |
|        |                      |             |                      | compromise a pod,    |                      |
|        |                      |             |                      |     they could       |                      |
|        |                      |             |                      | potentially take     |                      |
|        |                      |             |                      | advantage of this    |                      |
|        |                      |             |                      | capability to        |                      |
|        |                      |             |                      | perform network      |                      |
|        |                      |             |                      |     attacks on other |                      |
|        |                      |             |                      | pods running on the  |                      |
|        |                      |             |                      | same node            |                      |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| None   | Local to Pod (kube-  | Access Risk | Access to pod's      | Accessing the pod's  | ['/var/run/secrets/k |
|        | hunter-4vcx5)        |             | secrets              | secrets within a     | ubernetes.io/service |
|        |                      |             |                      | compromised pod      | account/ca.crt', '/v |
|        |                      |             |                      | might disclose       | ar/run/secrets/kuber |
|        |                      |             |                      | valuable data to a   | netes.io/serviceacco |
|        |                      |             |                      | potential attacker   | ...                  |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| KHV050 | Local to Pod (kube-  | Access Risk | Read access to pod's | Accessing the pod    | eyJhbGciOiJSUzI1NiIs |
|        | hunter-4vcx5)        |             | service account      | service account      | ImtpZCI6IldCSHhfVXY4 |
|        |                      |             | token                | token gives an       | WWNEeWd0Z0Z2N2VjSjFJ |
|        |                      |             |                      | attacker the option  | VG8zY1RWZlhna1FWdWlS |
|        |                      |             |                      | to use the server    | czJmMmcifQ.eyJhdWQiO |
|        |                      |             |                      | API                  | ...                  |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+

Kube Hunter couldn't find any clusters
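The `IndexError` in the traceback above comes from `aws_metadata_v1_discovery` splitting a CIDR entry from the metadata API response and unconditionally indexing the result (`cidr[0]`, `cidr[1]`); when the response is empty or malformed, the second element does not exist. A minimal defensive sketch (a hypothetical helper for illustration, not kube-hunter's actual code) would be:

```python
def parse_metadata_cidr(raw: str):
    """Parse an 'address/prefix' CIDR string from a cloud metadata API.

    Returns (address, prefix) for a well-formed entry, or None for an
    empty/malformed one, instead of raising IndexError as in the
    traceback above. (Hypothetical helper, not kube-hunter's real code.)
    """
    parts = raw.strip().split("/")
    if len(parts) != 2 or not all(parts):
        return None  # skip malformed entries rather than crash
    return parts[0], parts[1]
```

With a guard like this, an empty metadata response would simply yield no subnets rather than aborting host discovery with a traceback.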
danielsagi commented 2 years ago

Hi @shresthsuman, I see there might be a bug in the AWS metadata API discovery, which could be the reason kube-hunter can't find your nodes.

Can you exec into a pod in your cluster and issue some curl commands I'll supply? That would help me figure out the source of the problem.

danielsagi commented 2 years ago

@shresthsuman Thanks for your debug logs. I found a significant unwanted behavior in our host discovery when running on AWS: until now, if kube-hunter detected it was running inside AWS, it would only try to discover hosts through the metadata API. In your case that failed, so kube-hunter simply stopped the discovery process altogether.

Anyway, I fixed that and plan to release a version with the fix after running it through our testing process. If possible on your side, it would help a lot if you could run this new version and provide the debug log again.

I uploaded the fix to my private docker repo:

danielsagi/kube-hunter:bugfix-aws
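The behavior change described above — continuing with default host discovery when the cloud metadata API fails, instead of stopping entirely — can be sketched roughly as follows (illustrative logic only, not the actual patch):

```python
def discover_subnets(aws_metadata_subnets, default_subnets):
    """Illustrative sketch of the fallback described above (not the real patch).

    Previously, running inside AWS meant discovery relied solely on the
    metadata API; if that failed, discovery stopped. With a fallback,
    an empty metadata result no longer ends the hunt.
    """
    if aws_metadata_subnets:
        # Metadata API answered usefully: use its subnets
        return aws_metadata_subnets
    # Metadata API failed or returned nothing: fall back instead of aborting
    return default_subnets
```

The design point is simply that a failed cloud-specific probe should degrade to the generic discovery path rather than terminate the scan.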
shresthsuman commented 2 years ago

Hi @danielsagi, just fyi, we are running our K8s cluster on Hetzner servers.

As a next step, I will run the image from your private Docker repo and provide the debug log again.

Also, if you want, please share the curl commands.

shresthsuman commented 2 years ago

Here are the logs for danielsagi/kube-hunter:bugfix-aws:

https://raw.githubusercontent.com/shresthsuman/kube-hunter-logs/main/logs.txt