I ran kubectl get nodes to list the nodes, and there was 1 node in my cluster.
NAME STATUS ROLES AGE VERSION
k8s Ready control-plane,master 34d v1.23.7
I then ran ./kube-hunter.py --k8s-auto-discover-nodes --kubeconfig /root/.kube/config to auto-discover the nodes in the cluster, but the Nodes table listed 2 nodes, even though the log said "kube_hunter.modules.discovery.kubernetes_client Listed 1 nodes in the cluster". However, k8s is just the hostname of 192.168.1.133, so the detected services and vulnerabilities are duplicated.
2022-11-11 02:48:11,574 INFO kube_hunter.modules.report.collector Started hunting
2022-11-11 02:48:11,574 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2022-11-11 02:48:11,685 INFO kube_hunter.modules.discovery.kubernetes_client Listed 1 nodes in the cluster
2022-11-11 02:48:11,697 INFO kube_hunter.modules.report.collector Found open service "Etcd" at 192.168.1.133:2379
2022-11-11 02:48:11,721 INFO kube_hunter.modules.report.collector Found open service "Etcd" at k8s:2379
2022-11-11 02:48:11,815 INFO kube_hunter.modules.report.collector Found vulnerability "K8s Version Disclosure" in 192.168.1.133:6443
2022-11-11 02:48:11,820 INFO kube_hunter.modules.report.collector Found open service "API Server" at k8s:6443
2022-11-11 02:48:11,824 INFO kube_hunter.modules.report.collector Found vulnerability "K8s Version Disclosure" in k8s:6443
2022-11-11 02:48:11,831 INFO kube_hunter.modules.report.collector Found open service "API Server" at 192.168.1.133:6443
2022-11-11 02:48:11,928 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.1.133:10250
2022-11-11 02:48:11,932 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at k8s:10250
Nodes
+-------------+---------------+
| TYPE | LOCATION |
+-------------+---------------+
| Node/Master | k8s |
+-------------+---------------+
| Node/Master | 192.168.1.133 |
+-------------+---------------+
Detected Services
+-------------+---------------------+----------------------+
| SERVICE | LOCATION | DESCRIPTION |
+-------------+---------------------+----------------------+
| Kubelet API | k8s:10250 | The Kubelet is the |
| | | main component in |
| | | every Node, all pod |
| | | operations goes |
| | | through the kubelet |
+-------------+---------------------+----------------------+
| Kubelet API | 192.168.1.133:10250 | The Kubelet is the |
| | | main component in |
| | | every Node, all pod |
| | | operations goes |
| | | through the kubelet |
+-------------+---------------------+----------------------+
| Etcd | k8s:2379 | Etcd is a DB that |
| | | stores cluster's |
| | | data, it contains |
| | | configuration and |
| | | current |
| | | state |
| | | information, and |
| | | might contain |
| | | secrets |
+-------------+---------------------+----------------------+
| Etcd | 192.168.1.133:2379 | Etcd is a DB that |
| | | stores cluster's |
| | | data, it contains |
| | | configuration and |
| | | current |
| | | state |
| | | information, and |
| | | might contain |
| | | secrets |
+-------------+---------------------+----------------------+
| API Server | k8s:6443 | The API server is in |
| | | charge of all |
| | | operations on the |
| | | cluster. |
+-------------+---------------------+----------------------+
| API Server | 192.168.1.133:6443 | The API server is in |
| | | charge of all |
| | | operations on the |
| | | cluster. |
+-------------+---------------------+----------------------+
Vulnerabilities
For further information about a vulnerability, search its ID in:
https://avd.aquasec.com/
+--------+--------------------+----------------------+----------------------+----------------------+----------+
| ID | LOCATION | MITRE CATEGORY | VULNERABILITY | DESCRIPTION | EVIDENCE |
+--------+--------------------+----------------------+----------------------+----------------------+----------+
| KHV002 | k8s:6443 | Initial Access // | K8s Version | The kubernetes | v1.23.7 |
| | | Exposed sensitive | Disclosure | version could be | |
| | | interfaces | | obtained from the | |
| | | | | /version endpoint | |
+--------+--------------------+----------------------+----------------------+----------------------+----------+
| KHV002 | 192.168.1.133:6443 | Initial Access // | K8s Version | The kubernetes | v1.23.7 |
| | | Exposed sensitive | Disclosure | version could be | |
| | | interfaces | | obtained from the | |
| | | | | /version endpoint | |
+--------+--------------------+----------------------+----------------------+----------------------+----------+
Finally, I set the log level to debug and inspected the Kubernetes client REST response body. I think the cause is that the address type is not checked, so both the Hostname and InternalIP entries of the same node are reported as separate nodes.
Expected behavior
kube-hunter should check the address type before listing the cluster nodes, so that each node is counted once and the node count matches the actual number of nodes in the cluster.
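As a rough sketch of the deduplication described above: a node's status.addresses in the Kubernetes API response contains one entry per address type (e.g. Hostname and InternalIP for the same machine), so picking a single preferred type per node avoids the duplicate. The function name and the fallback behavior below are my own illustration, not kube-hunter's actual implementation; the address dicts mirror the shape of the REST response body.

```python
def node_addresses(nodes, preferred_type="InternalIP"):
    """Return one address per node, preferring `preferred_type`.

    `nodes` is a list of dicts shaped like the Kubernetes REST response
    for a node list (each with status.addresses).
    """
    results = []
    for node in nodes:
        addresses = node["status"]["addresses"]
        # Keep only addresses of the preferred type for this node.
        preferred = [a["address"] for a in addresses if a["type"] == preferred_type]
        # Fall back to the first listed address if the preferred type is absent.
        results.append(preferred[0] if preferred else addresses[0]["address"])
    return results

# The single-node cluster from this report: the hostname "k8s" and the
# internal IP 192.168.1.133 both describe the same node, but only one
# address is returned, so services are not detected twice.
nodes = [{
    "status": {
        "addresses": [
            {"type": "Hostname", "address": "k8s"},
            {"type": "InternalIP", "address": "192.168.1.133"},
        ]
    }
}]
print(node_addresses(nodes))  # ['192.168.1.133']
```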