Closed: cloudpea closed this issue 4 years ago.
Hi @cloudpea, thanks for raising this issue.
Can you supply some information about the way kube-hunter is run (i.e. execution flags) and an output log (preferably at debug level)?
If you are using --pod, then kube-hunter will try to access the metadata server to get network information. This step will fail since the instance metadata is blocked by policy. kube-hunter should then fall back and scan its local network (the pod network) for hosts. If it doesn't find any services on its local network, it prints the message you got.
I assume that the k8s components (control plane & kubelet) are not part of the pod subnet, and this is the reason kube-hunter fails to find a cluster. You can verify this by examining your network setup.
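To make the fallback described above concrete, here is a rough sketch in Python. This is not kube-hunter's actual code; the metadata URL, api-version, and response parsing are assumptions based on Azure's instance metadata service, and the helper names are hypothetical.

```python
# Sketch (not kube-hunter's actual code) of the fallback behaviour described
# above: query the instance metadata endpoint for the node subnet; if the
# request is blocked by policy, fall back to the pod's local /24.
import json
import socket
import urllib.request

# Azure IMDS endpoint; the api-version and response layout are assumptions.
METADATA_URL = ("http://169.254.169.254/metadata/instance/network"
                "?api-version=2019-06-01")

def _local_ip():
    """Best-effort local IP: open a UDP socket and read its source address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("10.255.255.255", 1))  # UDP connect sends no packets
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

def discover_scan_subnet(timeout=1.0):
    """Return a CIDR to scan: the node subnet from metadata, else the pod /24."""
    try:
        req = urllib.request.Request(METADATA_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            data = json.load(resp)
        # Hypothetical parsing; verify against the real metadata schema.
        subnet = data["interface"][0]["ipv4"]["subnet"][0]
        return "{}/{}".format(subnet["address"], subnet["prefix"])
    except Exception:
        # Metadata blocked (e.g. by blockInstanceMetadata): scan only the
        # pod's own /24, which may not include the control plane or kubelets.
        return ".".join(_local_ip().split(".")[:3]) + ".0/24"
```

When the metadata endpoint is blocked, the function degrades to the pod subnet, which is exactly the situation that produces the "couldn't find any clusters" message when the cluster components live elsewhere.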
If this is the case and you want the full scan results you had before, try using --cidr and specifying your node subnet.
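For example (the CIDR below is illustrative, not your actual value; check your nodes' addresses first):

```shell
# Illustrative only: 10.240.0.0/16 is a common AKS node subnet; replace it
# with the subnet your nodes actually use.
kubectl get nodes -o wide          # check the INTERNAL-IP column
kube-hunter --cidr 10.240.0.0/16   # scan the node subnet directly
```

With --cidr, kube-hunter skips the metadata lookup for that range and scans the specified subnet, so the blocked metadata endpoint no longer limits discovery.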
I guess we can think about a way to overcome this limitation and retrieve the node subnet from some other source (such as a traceroute).
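As a hedged sketch of what an "other source" could look like: on Linux, the pod's default gateway can be read from /proc/net/route without touching the metadata server, and the node subnet could be guessed from it. This is only an illustration of the idea, not an implemented kube-hunter feature.

```python
# Read the default gateway from /proc/net/route (Linux only); the gateway
# address is one metadata-free hint at the surrounding network.
import socket
import struct

def default_gateway():
    """Return the default gateway IP from /proc/net/route, or None."""
    try:
        with open("/proc/net/route") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                # Destination 00000000 with the RTF_GATEWAY flag (0x2)
                # set marks the default route.
                if fields[1] == "00000000" and int(fields[3], 16) & 0x2:
                    return socket.inet_ntoa(
                        struct.pack("<L", int(fields[2], 16)))
    except (OSError, StopIteration, IndexError, ValueError):
        pass
    return None
```

The fields in /proc/net/route are little-endian hex, hence the `struct.pack("<L", ...)` conversion before formatting the address.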
@cloudpea Closing for now. Please reopen if you have any more related questions.
What are you trying to achieve?
KHV003 was flagged as a vulnerability when we first ran kube-hunter, so we enabled the blockInstanceMetadata flag in our aad-pod-identity Helm chart to resolve it.
https://github.com/Azure/aad-pod-identity/blob/master/docs/readmes/README.featureflags.md#block-instance-metadata-flag
Upon running kube-hunter again, the tool is now unable to scan the cluster and reports the error: "Kube Hunter couldn't find any clusters".
Is this expected behaviour when kube-hunter cannot access the metadata service?