aquasecurity / kube-hunter

Hunt for security weaknesses in Kubernetes clusters
Apache License 2.0

Kube Hunter couldn't find any clusters - Pod Mode - AWS EKS #358

Open luciano-nbs opened 4 years ago

luciano-nbs commented 4 years ago

What happened

Running the standard kube-hunter pod/job on an AWS EKS cluster returns the message "Kube Hunter couldn't find any clusters".

I've verified that I can manually connect to the API endpoint from within the pod using curl and the secrets available in the pod.

/kube-hunter # KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/kube-hunter # curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/version
{
  "major": "1",
  "minor": "16+",
  "gitVersion": "v1.16.8-eks-e16311",
  "gitCommit": "e163110a04dcb2f39c3325af96d019b4925419eb",
  "gitTreeState": "clean",
  "buildDate": "2020-03-27T22:37:12Z",
  "goVersion": "go1.13.8",
  "compiler": "gc",
  "platform": "linux/amd64"
}
With debug logging enabled on kube-hunter, I see the following exceptions...

2020-06-16 10:07:13,890 INFO kube_hunter.modules.report.collector Found vulnerability "Access to pod's secrets" in Local to Pod (kube-hunter-cert-manager-m7h4z)
2020-06-16 10:07:13,892 DEBUG urllib3.connectionpool Starting new HTTP connection (1): 169.254.169.254:80
2020-06-16 10:07:13,894 DEBUG urllib3.connectionpool http://169.254.169.254:80 "GET /metadata/instance?api-version=2017-08-01 HTTP/1.1" 404 337
2020-06-16 10:07:13,896 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): canhazip.com:443
2020-06-16 10:07:13,928 DEBUG kube_hunter.core.events.handler ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 352, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/local/lib/python3.8/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/local/lib/python3.8/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/local/lib/python3.8/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 400, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.8/site-packages/urllib3/packages/six.py", line 734, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 352, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/local/lib/python3.8/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/local/lib/python3.8/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/local/lib/python3.8/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/core/events/handler.py", line 134, in worker
    hook.execute()
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/hosts.py", line 107, in execute
    subnets, ext_ip = self.traceroute_discovery()
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/hosts.py", line 140, in traceroute_discovery
    external_ip = requests.get("https://canhazip.com", timeout=config.network_timeout).text
  File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
2020-06-16 10:07:13,931 DEBUG kube_hunter.core.events.handler Event <class 'kube_hunter.core.events.types.HuntFinished'> got published with <kube_hunter.core.events.types.HuntFinished object at 0x7f7574c10760>
2020-06-16 10:07:13,931 DEBUG kube_hunter.core.events.handler Executing <class 'kube_hunter.modules.report.collector.SendFullReport'> with {'previous': None, 'hunter': None}
2020-06-16 10:07:13,947 DEBUG kube_hunter.modules.report.dispatchers Dispatching report via stdout
2020-06-16 10:07:13,947 DEBUG __main__ Cleaned Queue

Expected behavior

I should see the cluster/service vulnerabilities printed.

lizrice commented 4 years ago

I think this has been fixed as part of #342, but you've seen it because the kube-hunter image was broken and didn't get updated for a while. That is now sorted out: the latest image on Docker Hub was built two days ago, so this should be corrected now.

The logs show the following line

  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/hosts.py", line 140, in traceroute_discovery
    external_ip = requests.get("https://canhazip.com", timeout=config.network_timeout).text

but the call to canhazip.com was removed as part of #342.

@luciano-nbs please could you try it again, making sure the image is refreshed, and let us know how you get on?
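
For reference, something along these lines should make sure a fresh image gets pulled (just a sketch; it assumes the job.yaml from this repo and the default kube-hunter Job name, so adjust to your setup):

# Remove the old Job and its pod, then recreate it. The repo's job.yaml uses the
# untagged aquasec/kube-hunter image, so imagePullPolicy defaults to Always and a
# fresh image is pulled when the new pod starts.
kubectl delete job kube-hunter --ignore-not-found
kubectl create -f job.yaml
kubectl logs -f job/kube-hunter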

thinksabin commented 4 years ago

hi Liz,

I'm using the master branch at the latest commit (2428e2e869b7e07d5cc154d25ebeaceb4cc2085f), and running pod.yaml in the EKS cluster still gives the message that no cluster was found. Despite that message, a few information-disclosure issues are flagged, for example exposure of the token inside the pod (/var/run/secrets...). Is this the expected behaviour? Thanks.

lizrice commented 3 years ago

I don't think this is specific to EKS - I've seen the same on a kind cluster running with --pod:

 k logs kube-hunter-hcg98  
2020-09-03 14:15:19,483 INFO kube_hunter.modules.report.collector Started hunting
2020-09-03 14:15:19,483 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2020-09-03 14:15:19,494 INFO kube_hunter.modules.report.collector Found vulnerability "CAP_NET_RAW Enabled" in Local to Pod (kube-hunter-hcg98)
2020-09-03 14:15:19,495 INFO kube_hunter.modules.report.collector Found vulnerability "Read access to pod's service account token" in Local to Pod (kube-hunter-hcg98)
2020-09-03 14:15:19,496 INFO kube_hunter.modules.report.collector Found vulnerability "Access to pod's secrets" in Local to Pod (kube-hunter-hcg98)

Vulnerabilities
For further information about a vulnerability, search its ID in:
https://github.com/aquasecurity/kube-hunter/tree/master/docs/_kb
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| ID     | LOCATION             | CATEGORY    | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| None   | Local to Pod (kube-  | Access Risk | CAP_NET_RAW Enabled  | CAP_NET_RAW is       |                      |
|        | hunter-hcg98)        |             |                      | enabled by default   |                      |
|        |                      |             |                      | for pods.            |                      |
|        |                      |             |                      |     If an attacker   |                      |
|        |                      |             |                      | manages to           |                      |
|        |                      |             |                      | compromise a pod,    |                      |
|        |                      |             |                      |     they could       |                      |
|        |                      |             |                      | potentially take     |                      |
|        |                      |             |                      | advantage of this    |                      |
|        |                      |             |                      | capability to        |                      |
|        |                      |             |                      | perform network      |                      |
|        |                      |             |                      |     attacks on other |                      |
|        |                      |             |                      | pods running on the  |                      |
|        |                      |             |                      | same node            |                      |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| None   | Local to Pod (kube-  | Access Risk | Access to pod's      |  Accessing the pod's | ['/var/run/secrets/k |
|        | hunter-hcg98)        |             | secrets              | secrets within a     | ubernetes.io/service |
|        |                      |             |                      | compromised pod      | ...                  |
|        |                      |             |                      | might disclose       |                      |
|        |                      |             |                      | valuable data to a   |                      |
|        |                      |             |                      | potential attacker   |                      |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+
| KHV050 | Local to Pod (kube-  | Access Risk | Read access to pod's |  Accessing the pod   | eyJhbGciOiJSUzI1NiIs |
|        | hunter-hcg98)        |             | service account      | service account      | ImtpZCI6IiJ9.eyJpc3M |
|        |                      |             | token                | token gives an       | ...                  |
|        |                      |             |                      | attacker the option  |                      |
|        |                      |             |                      | to use the server    |                      |
|        |                      |             |                      | API                  |                      |
+--------+----------------------+-------------+----------------------+----------------------+----------------------+

Kube Hunter couldn't find any clusters

magnologan commented 3 years ago

Yeah, I had a similar issue on EKS today. The message appears at the bottom after the vulns are reported, just like @lizrice posted above. Nothing major. It seems like normal behavior, but the message at the bottom is a bit confusing indeed.

danielsagi commented 3 years ago

@magnologan Could you please run the master branch again in your setup with --log debug? I would like to see why this is happening. Are you able to get the version from within the new master-branch pod using curl?
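
If it helps, here is roughly how you could get the current master branch running in-cluster (a sketch only; <your-registry> is a placeholder, and it assumes the Dockerfile and job.yaml in the repo root):

# Build and push an image from the master branch (swap in your own registry).
git clone https://github.com/aquasecurity/kube-hunter && cd kube-hunter
docker build -t <your-registry>/kube-hunter:master .
docker push <your-registry>/kube-hunter:master
# In job.yaml, point the container at the pushed image and set the args as
# separate list items, e.g. args: ["--pod", "--log", "debug"], then delete and
# recreate the Job and collect the logs as above.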

leozheng2000 commented 3 years ago

I got the same issue on EKS. Here is the debug info, @danielsagi:

The Job "kube-hunter" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"controller-uid":"7065ea70-693d-414b-a150-d2f0d030922f", "job-name":"kube-hunter"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"kube-hunter", Image:"aquasec/kube-hunter", Command:[]string{"kube-hunter"}, Args:[]string{"--interface --log debug"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(core.Probe)(nil), ReadinessProbe:(core.Probe)(nil), Lifecycle:(core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(int64)(0xc0468d9f28), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(bool)(nil), NodeName:"", SecurityContext:(core.PodSecurityContext)(0xc03ebf1260), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(int32)(nil), PreemptionPolicy:(core.PreemptionPolicy)(nil), DNSConfig:(core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(string)(nil), EnableServiceLinks:(bool)(nil)}}: field is immutable

BTW, the k8s version is 1.15.

danielsagi commented 3 years ago

Hi @leozheng2000, your issue is unrelated; that's a problem with the Job manifest. You need to delete the existing Job before you create a new one.
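
Concretely, something like this (assuming the default Job name and your local job.yaml):

# spec.template of a Job is immutable, so the old Job has to be deleted before
# the updated manifest is applied.
kubectl delete job kube-hunter
kubectl create -f job.yaml

One more thing visible in your dump: the args came through as a single string ("--interface --log debug"). In the manifest they would normally be separate list items, e.g. args: ["--interface", "--log", "debug"], otherwise kube-hunter receives them as one argument.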

nullfieldio commented 2 years ago

So is there an update for this? Is the message just a red herring if it spits out vulnerabilities beforehand, like in https://github.com/aquasecurity/kube-hunter/issues/358#issuecomment-686530347?