mrunalinir opened this issue 3 years ago
Hi @mrunalinir When running in interface mode, kube-hunter iterates over all of your locally accessible subnets to find nodes. When running in pod mode, it does the same, but it also explicitly accesses the predefined service IP address of the API server (taken from an environment variable 'mounted' inside the pod). This probably explains the extra node you're seeing. Behind the scenes, the 2 IPs you're seeing most likely belong to a single worker node that is simply reachable at two addresses (usually via the default gateway of your pod, and via the service IP).
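To make the pod-mode part concrete: the API server's service address comes from environment variables Kubernetes injects into every pod (these are the standard Kubernetes variable names, not kube-hunter internals). A minimal sketch of that lookup:

```python
import os
import socket

# These env vars are injected by Kubernetes into every pod; pod mode can
# read the API server's service address from them instead of scanning for it.
api_host = os.environ.get("KUBERNETES_SERVICE_HOST")
api_port = int(os.environ.get("KUBERNETES_SERVICE_PORT", "443"))

if api_host is None:
    print("not running inside a pod (service env vars are missing)")
else:
    try:
        # Quick reachability check against the predefined service IP.
        with socket.create_connection((api_host, api_port), timeout=3):
            print(f"API server reachable at {api_host}:{api_port}")
    except OSError as exc:
        print(f"API server at {api_host}:{api_port} not reachable: {exc}")
```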
If you are still curious about this, feel free to share your logs so I can help you better understand this behavior :)
Hey @danielsagi on further scrutiny, I noticed that in pod mode it is detecting 2 kubelet APIs in addition to the API server: one at the node's own IP, and the other at a similar-looking IP that does not map to any other node in the cluster. I wanted to confirm whether, in pod mode, kube-hunter only checks the node it is running on (which may be reachable via 2 IPs) in addition to the API server.
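One way I could verify the "one node, two IPs" theory: compare the TLS serving certificate each of the 2 kubelet endpoints presents. If both IPs return the same certificate, it is almost certainly a single node reachable at two addresses. A rough sketch, with placeholder IPs and the default kubelet secure port assumed:

```python
import socket
import ssl

# Placeholder values: substitute the two kubelet IPs kube-hunter reported.
candidates = ["172.20.1.10", "172.20.1.99"]
KUBELET_PORT = 10250  # default kubelet secure port

socket.setdefaulttimeout(5)  # don't hang on unreachable addresses

certs = {}
for ip in candidates:
    try:
        # Fetch (without validating) the TLS serving certificate; kubelets
        # often use self-signed certs, and we only want to compare the PEMs.
        certs[ip] = ssl.get_server_certificate((ip, KUBELET_PORT))
    except OSError as exc:
        print(f"{ip}: no TLS answer on port {KUBELET_PORT} ({exc})")

if len(certs) == 2:
    a, b = certs.values()
    print("same certificate -> likely one node with two IPs" if a == b
          else "different certificates -> likely two distinct kubelets")
```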
I have an EKS cluster at 172.20.x.x with 3 worker nodes running. When I use interface mode on the cluster network, it lists the 3 nodes at the worker-node addresses, plus an additional node at 192.68.x.x and 2 others. In pod mode, it shows 4 nodes at 172.20.x.x: 2 of the worker-node IPs and one of the additional IPs found in interface mode. I was wondering what the additional nodes could be, and why the listed nodes differ between the two modes?
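Since interface mode derives its targets from the local interfaces, listing the IPv4 subnets visible on the scanning machine usually explains "extra" findings (secondary ENIs, CNI bridges, docker0, etc. all contribute ranges). A Linux-only diagnostic sketch that shells out to `ip` (an assumption about the host):

```python
import ipaddress
import subprocess

# List every IPv4 address/prefix assigned to a local interface; these are
# the subnets an interface-mode scan would iterate over.
out = subprocess.run(
    ["ip", "-o", "-4", "addr", "show"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    fields = line.split()
    ifname, cidr = fields[1], fields[3]  # e.g. eth0, 172.20.1.10/24
    net = ipaddress.ip_interface(cidr).network
    print(f"{ifname}: would scan {net} ({net.num_addresses} addresses)")
```

On EKS in particular, the VPC CNI often attaches multiple ENIs and secondary IPs to each worker, which could make a single node show up at more than one 172.20.x.x address.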