aquasecurity / kube-hunter

Hunt for security weaknesses in Kubernetes clusters
Apache License 2.0
4.66k stars, 578 forks

Cluttered JSON output in KH v0.5.2 run as K8s pod #465

Closed: danielpacak closed this 2 years ago

danielpacak commented 3 years ago

What happened

I integrated KH with Starboard so that it runs as a K8s pod with JSON output format, and we then parse the pod logs. I realised that in v0.5.2 parsing the JSON output fails because KH prints a stack trace before the actual JSON content:

2021-06-26 14:08:07,371 ERROR kube_hunter.modules.discovery.kubernetes_client Failed to initiate Kubernetes client
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kube_hunter/modules/discovery/kubernetes_client.py", line 13, in list_all_k8s_cluster_nodes
    kubernetes.config.load_incluster_config()
  File "/usr/local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 118, in load_incluster_config
    InClusterConfigLoader(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 54, in load_and_set
    self._load_config()
  File "/usr/local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 73, in _load_config
    raise ConfigException("Service token file does not exists.")
kubernetes.config.config_exception.ConfigException: Service token file does not exists.
{"nodes": [], "services": [], "vulnerabilities": [{"location": "Local to Pod (359a3672-3ed9-40df-a3bd-b24c1cf585a6-qr95w)", "vid": "None", "category": "Access Risk", "severity": "low", "vulnerability": "CAP_NET_RAW Enabled", "description": "CAP_NET_RAW is enabled by default for pods.\n    If an attacker manages to compromise a pod,\n    they could potentially take advantage of this capability to perform network\n    attacks on other pods running on the same node", "evidence": "", "avd_reference": "https://avd.aquasec.com/kube-hunter/none/", "hunter": "Pod Capabilities Hunter"}]}
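As a workaround until the underlying bug is fixed, a consumer can scan the pod log from the end and take the last line that parses as JSON, since the report is emitted after any stray log output. A minimal sketch (the extract_report helper is hypothetical, not part of kube-hunter or Starboard):

```python
import json

def extract_report(raw_output: str) -> dict:
    """Recover the JSON report from kube-hunter pod logs that may be
    polluted by tracebacks or log lines printed before it.
    (Hypothetical helper, not part of kube-hunter or Starboard.)"""
    # The report is the final line of output, so scan lines from the
    # end and return the first one that parses as a JSON object.
    for line in reversed(raw_output.splitlines()):
        line = line.strip()
        if line.startswith("{"):
            try:
                return json.loads(line)
            except json.JSONDecodeError:
                continue
    raise ValueError("no JSON report found in kube-hunter output")

# Example: log noise followed by the actual report line.
mixed = (
    "2021-06-26 14:08:07,371 ERROR Failed to initiate Kubernetes client\n"
    "Traceback (most recent call last):\n"
    '{"nodes": [], "services": [], "vulnerabilities": []}'
)
report = extract_report(mixed)
```

This is brittle if the report ever spans multiple lines, so it is only a stopgap for consumers stuck on v0.5.2.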

Expected behavior

Either an error or valid JSON output, not both. Otherwise it's hard to determine whether KH failed or succeeded.
I did not run into this issue with the previous v0.4.1.

danielsagi commented 2 years ago

@danielpacak That's because of a bug in a new feature we've added. In any case, from your question I understand you run kube-hunter with --log error. A better practice, which will also solve this issue, is running with --log none. That way, even if kube-hunter hits some internal error, it will not mess up the report output.
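For pod-based runs like the Starboard integration, that suggestion amounts to passing --log none alongside the report flag in the container args. A sketch of the relevant container spec, assuming the stock aquasec/kube-hunter image (the container name and image tag here are illustrative):

```yaml
containers:
  - name: kube-hunter
    image: aquasec/kube-hunter:0.5.2
    command: ["kube-hunter"]
    # --report json selects JSON output; --log none suppresses internal
    # error logging so the pod log contains only the JSON report.
    args: ["--pod", "--report", "json", "--log", "none"]
```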

danielpacak commented 2 years ago

Got it. Thanks for the hint. We'll update the log level appropriately.