Open 0xElessar opened 9 months ago
I will take a proper look next week but it seems like a manifestation of this issue: https://github.com/gruntwork-io/terratest/issues/976#issuecomment-1139051034
Could just be a case of including:
import _ "k8s.io/client-go/plugin/pkg/client/auth"
in https://github.com/DataDog/KubeHound/blob/main/pkg/collector/k8s_api.go
Thank you, @d0g0x01 .
This helped a bit, but another error came up, which I have no clue how to deal with :(
FATA[0001] raw data ingest: collector client creation: getting kubernetes config: The azure auth plugin has been removed.
Please use the https://github.com/Azure/kubelogin kubectl/client-go credential plugin instead.
See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins for further details component=kubehound run_id=8b0ab0e7-341a-4b7e-bf1a-e2588df908dc service=kubehound
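The error message itself points at kubelogin as the replacement for the removed in-tree azure auth provider. A possible workaround (untested in this thread, and it assumes you have the Azure CLI already logged in) is to rewrite the kubeconfig so it uses the kubelogin exec plugin before re-running KubeHound:

```shell
# Requires https://github.com/Azure/kubelogin on PATH.
# Rewrite the kubeconfig's azure auth entries to use the kubelogin
# exec credential plugin, reusing the Azure CLI's cached login:
kubelogin convert-kubeconfig -l azurecli

# then retry:
./kubehound.sh run
```

Other login modes (e.g. `devicecode`, `spn`) exist if the Azure CLI is not an option in your environment.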
thank you
Apologies for the reminder, @d0g0x01, but is there any quick fix? I will lose access to the current cluster in 2-3 days (my assessment time is ending), and I am really interested to see your tool in action in this environment.
thank you
I don't think so via the API collector :( BUT if you have kubectl access you could use the offline mode:
1) collect the data using the https://github.com/DataDog/KubeHound/blob/main/scripts/collectors/collect.sh script (or similar)
2) configure kubehound to use the file collector: https://github.com/DataDog/KubeHound/blob/main/configs/etc/kubehound-reference.yaml#L21
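The two steps above can be sketched roughly as follows. The config key names here are my reading of the reference config linked above, not verified against it, so check `kubehound-reference.yaml` for the exact keys; step 1 is commented out because it needs a live cluster.

```shell
mkdir -p ./kubehound-dump

# 1) dump the cluster with the collector script (requires kubectl access):
#    ./collect.sh ./kubehound-dump

# 2) point KubeHound at the dump via the file collector
#    (key names assumed from the reference config):
cat > ./kubehound.yaml <<'EOF'
collector:
  type: file-collector
  file:
    directory: ./kubehound-dump
EOF

cat ./kubehound.yaml
```

KubeHound would then be run with `-c ./kubehound.yaml` so it reads the dump instead of hitting the API server.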
Much appreciated. Thank you for that, @d0g0x01 !
hello @d0g0x01 ,
that almost worked!
Unfortunately, it seems the collector is aware of namespaces and creates a folder structure for them, but the ingestor tool is not: it looks for the roles* files in the main folder instead of in the namespace subfolders. There is also an error, "could not write in bulk to mongo: context canceled", which I am not sure how to deal with either.
./kubehound.sh run
INFO[0001] Creating file collector from directory /opt/kubehound/ component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
INFO[0001] Loaded local-file-collector collector client component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
[...]
INFO[0001] Running ingest k8s-cluster-role-ingest component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
INFO[0001] Running ingest k8s-role-ingest component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
ERRO[0001] k8s-role-ingest run: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
ERRO[0001] k8s-cluster-role-ingest run: 1 error occurred:
* could not write in bulk to mongo: context canceled
component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
ERRO[0001] ingestor sequence core-pipeline run: group k8s-role-group ingest: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
Error: raw data ingest: ingest: group k8s-role-group ingest: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory
Usage:
kubehound-local [flags]
Flags:
-c, --config string application config file
-h, --help help for kubehound-local
FATA[0001] raw data ingest: ingest: group k8s-role-group ingest: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
Anyway, it was really close. Thank you for all the assistance.
Sorry - it's not a fully supported feature yet and we mainly use it for debugging/dev. However, if you move/rename the files to the structure defined here you should be good:
https://github.com/DataDog/KubeHound/blob/main/pkg/collector/file.go#L24
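The reshuffle described above can be scripted. This is only a sketch: the target filenames below (suffixing each file with its namespace) are illustrative, and the authoritative layout the ingestor expects is the one defined in `pkg/collector/file.go`, so adjust the `mv` target accordingly. It is demonstrated on a scratch directory; in practice `ROOT` would be the collector output (e.g. `/opt/kubehound/test-cluster`).

```shell
# Build a scratch directory mimicking the per-namespace collector output:
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/default" "$ROOT/kube-system"
echo '{"items":[]}' > "$ROOT/default/roles.rbac.authorization.k8s.io.json"
echo '{"items":[]}' > "$ROOT/kube-system/roles.rbac.authorization.k8s.io.json"

# Flatten: hoist each namespaced roles file into the cluster root,
# tagging it with its namespace to avoid filename collisions:
for f in "$ROOT"/*/roles.rbac.authorization.k8s.io.json; do
  ns="$(basename "$(dirname "$f")")"
  mv "$f" "$ROOT/roles.rbac.authorization.k8s.io.$ns.json"
done

ls "$ROOT"
```

The same loop pattern would apply to the other namespaced resource dumps (rolebindings, pods, etc.) once the expected names are confirmed against `file.go`.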
I fully understand, don't worry. Offline ingestion is another great feature to have, though.
Thank you for quick responses and help, @d0g0x01 !!!
Hello,
thanks for the amazing tool. Unfortunately, I cannot use it in my current assessment. My kubeconfig works fine and I can get all the information I want from kubectl, but when I run './kubehound.sh run', I am getting this error:
Any suggestion, please?
thanks