Closed · shivaccuknox closed 5 months ago
When the nimbus-kubearmor adapter and KubeArmor are installed separately:
$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kubearmor-operator kubearmor 1 2024-07-02 13:30:02.056137089 +0000 UTC deployed kubearmor-operator-v1.3.4 v1.3.4
nimbus-kubearmor nimbus 1 2024-07-02 13:29:25.838576952 +0000 UTC deployed nimbus-kubearmor-0.1.3 0.1.2
nimbus-operator nimbus 1 2024-07-02 13:22:36.209345493 +0000 UTC deployed nimbus-0.1.2 0.1.1
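For reference, the separate installs shown in the release list can be reproduced with something like the following. This is a sketch: the repo URLs and the `5gsec/nimbus-kubearmor` chart path are assumptions inferred from the release/chart names above, so adjust them to your environment.

```shell
# Assumed repo URLs -- verify against the official docs before running.
helm repo add kubearmor https://kubearmor.github.io/charts
helm repo add 5gsec https://5gsec.github.io/charts
helm repo update

# KubeArmor operator in its own namespace
helm install kubearmor-operator kubearmor/kubearmor-operator \
  -n kubearmor --create-namespace

# Nimbus operator and the KubeArmor adapter in the nimbus namespace
helm install nimbus-operator 5gsec/nimbus -n nimbus --create-namespace
helm install nimbus-kubearmor 5gsec/nimbus-kubearmor -n nimbus
```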
The KubeArmor pods are in the kubearmor namespace (`kg` here is shorthand for `kubectl get`):
$ kg -n kubearmor pods
NAME READY STATUS RESTARTS AGE
kubearmor-bpf-containerd-98c2c-7v2gt 1/1 Running 0 18s
kubearmor-bpf-containerd-98c2c-mtrbv 1/1 Running 0 17s
kubearmor-controller-6569f787d5-lcxqj 2/2 Running 0 18s
kubearmor-operator-55db48ffcb-whfbm 1/1 Running 0 34s
kubearmor-relay-67fb6dd7bb-bp7mm 1/1 Running 0 18s
kubearmor-snitch-2kq27-dvrll 0/1 Completed 0 23s
kubearmor-snitch-sm8sv-c4g7m 0/1 Completed 0 23s
The adapters, nimbus-operator, and Kyverno are in the nimbus namespace:
$ kg -n nimbus pods
NAME READY STATUS RESTARTS AGE
kyverno-admission-controller-895c9dc9b-9qpkb 1/1 Running 0 80m
kyverno-background-controller-7bd77f8957-8gddx 1/1 Running 0 80m
kyverno-cleanup-admission-reports-28665520-q7nbr 0/1 Completed 0 3m4s
kyverno-cleanup-cluster-admission-reports-28665520-hdjpw 0/1 Completed 0 3m4s
kyverno-cleanup-cluster-ephemeral-reports-28665520-wlp9g 0/1 Completed 0 3m4s
kyverno-cleanup-controller-84cb4cbc8d-wlsfh 1/1 Running 0 80m
kyverno-cleanup-ephemeral-reports-28665520-g64xq 0/1 Completed 0 3m4s
kyverno-cleanup-update-requests-28665520-f9cml 0/1 Completed 0 3m4s
kyverno-reports-controller-7579899b66-qfxfp 1/1 Running 0 80m
nimbus-k8tls-rzdb6 1/1 Running 0 80m
nimbus-kubearmor-pfhdv 1/1 Running 0 73m
nimbus-kyverno-l22t9 1/1 Running 0 80m
nimbus-netpol-64tz7 1/1 Running 0 80m
nimbus-operator-58d454b94b-54hmr 1/1 Running 0 80m
The DNS-manipulation intent and the prevent-execution-from-temp-folder intent both work fine:
$ k exec -it -n free5gc-cp busybox -- sh
~ # echo "hello" >> /etc/resolv.conf
sh: write error: Permission denied
~ # cp /bin/wget /tmp/
cp: can't create '/tmp/wget': Permission denied
~ # exit
command terminated with exit code 1
With the combined Helm chart, KubeArmor is installed in the nimbus namespace, and the snitch pods are crash-looping:
$ kg -n nimbus pods
NAME READY STATUS RESTARTS AGE
kubearmor-controller-699bbcd48d-sdg85 2/2 Running 0 70s
kubearmor-operator-55db48ffcb-gpfj8 1/1 Running 0 77s
kubearmor-relay-5749b9c86c-l7ng4 1/1 Running 0 59s
kubearmor-snitch-bqpvm-5x84q 0/1 CrashLoopBackOff 3 (21s ago) 65s
kubearmor-snitch-vkz8v-g5szh 0/1 CrashLoopBackOff 3 (24s ago) 65s
kyverno-admission-controller-895c9dc9b-zmmhn 1/1 Running 0 77s
kyverno-background-controller-7bd77f8957-pmsdk 1/1 Running 0 77s
kyverno-cleanup-controller-84cb4cbc8d-dsrww 1/1 Running 0 77s
kyverno-reports-controller-7579899b66-kjzhf 1/1 Running 0 77s
nimbus-kubearmor-zwk6l 1/1 Running 0 77s
nimbus-kyverno-4g2pn 1/1 Running 0 77s
nimbus-netpol-zj69f 1/1 Running 0 77s
nimbus-operator-58d454b94b-tdtb5 1/1 Running 0 77s
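The crash loop itself isn't diagnosed in this thread; the standard first steps for digging into it would be along these lines (pod name taken from the listing above, so substitute your own):

```shell
# Events and container state for one of the crash-looping snitch pods
kubectl -n nimbus describe pod kubearmor-snitch-bqpvm-5x84q

# Logs from the current and the previous (crashed) container run
kubectl -n nimbus logs kubearmor-snitch-bqpvm-5x84q
kubectl -n nimbus logs kubearmor-snitch-bqpvm-5x84q --previous

# Snitch pods are created by the operator, so its logs may explain the failure
kubectl -n nimbus logs deploy/kubearmor-operator
```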
Installation is working as expected for me on my k0s cluster.
❯ helm upgrade --dependency-update --install nimbus-operator 5gsec/nimbus -n nimbus --create-namespace
Release "nimbus-operator" does not exist. Installing it now.
coalesce.go:237: warning: skipped value for nimbus-kubearmor.autoDeploy: Not a table.
coalesce.go:237: warning: skipped value for nimbus-kyverno.autoDeploy: Not a table.
NAME: nimbus-operator
LAST DEPLOYED: Wed Jul 3 13:45:47 2024
NAMESPACE: nimbus
STATUS: deployed
REVISION: 1
❯ kg -n nimbus po
NAME READY STATUS RESTARTS AGE
kubearmor-bpf-containerd-e6e27-x82wx 1/1 Running 0 80s
kubearmor-controller-6b77bc6479-dbvp7 2/2 Running 0 5m23s
kubearmor-operator-5cdbc5f9-5hpmp 1/1 Running 0 8m48s
kubearmor-relay-f4f9b5cfd-7w9qr 1/1 Running 0 6m15s
kyverno-admission-controller-7875989d78-j6xmf 1/1 Running 0 8m48s
kyverno-background-controller-6c7776f6f8-rmmxc 1/1 Running 0 8m48s
kyverno-cleanup-admission-reports-28666580-s88hk 0/1 Completed 0 4m38s
kyverno-cleanup-cluster-admission-reports-28666580-jllts 0/1 Completed 0 4m38s
kyverno-cleanup-cluster-ephemeral-reports-28666580-qlcs9 0/1 Completed 0 4m38s
kyverno-cleanup-controller-6cc568f64f-xr4rb 1/1 Running 0 8m48s
kyverno-cleanup-ephemeral-reports-28666580-64rwt 0/1 Completed 0 4m38s
kyverno-cleanup-update-requests-28666580-zwzsg 0/1 Completed 0 4m38s
kyverno-reports-controller-7f5bd6b7bf-5jxtx 1/1 Running 0 8m48s
nimbus-kubearmor-xwktr 1/1 Running 0 8m48s
nimbus-kyverno-zgwht 1/1 Running 0 8m48s
nimbus-netpol-m2j5r 1/1 Running 0 8m48s
nimbus-operator-6c799ddf6-nvwrj 1/1 Running 0 8m48s
There were some stale ClusterRoles and ClusterRoleBindings left over from the earlier KubeArmor install (these are cluster-scoped, not tied to the kubearmor namespace). Once they were deleted, the install went through, and the intents (prevent execution from temp folder, DNS manipulation) are working fine:
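A generic way to find and clear such stale KubeArmor RBAC objects is sketched below; this is not an official cleanup procedure, so review the listed objects before deleting anything:

```shell
# List any cluster-scoped KubeArmor RBAC objects left behind by a previous install
kubectl get clusterrole,clusterrolebinding -o name | grep kubearmor

# Once reviewed, delete them (xargs -r skips the delete when the list is empty)
kubectl get clusterrole,clusterrolebinding -o name \
  | grep kubearmor \
  | xargs -r kubectl delete
```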
$ k exec -it -n free5gc-cp busybox -- sh
/ # cp /bin/wget /tmp/
/ # /tmp/wget
sh: /tmp/wget: Permission denied
/ # echo "hello world" >> /etc/resolv.conf
sh: write error: Permission denied
Even now, if the Nimbus operator is uninstalled, some resources are left behind:
$ helm uninstall -n nimbus nimbus-operator
release "nimbus-operator" uninstalled
$ kg clusterrolebinding | grep kubearmor
kubearmor-clusterrolebinding ClusterRole/kubearmor-clusterrole 6m25s
kubearmor-controller-clusterrolebinding ClusterRole/kubearmor-controller-clusterrole 6m24s
kubearmor-controller-proxy-rolebinding ClusterRole/kubearmor-controller-proxy-role 6m23s
kubearmor-relay-clusterrolebinding ClusterRole/kubearmor-relay-clusterrole 6m24s
kubearmor-snitch-binding ClusterRole/kubearmor-snitch 6m23s
$ kg role,rolebinding -n nimbus | grep kubearmor
role.rbac.authorization.k8s.io/kubearmor-controller-leader-election-role 2024-07-03T15:05:26Z
rolebinding.rbac.authorization.k8s.io/kubearmor-controller-leader-election-rolebinding Role/kubearmor-controller-leader-election-role 8m16s
shiv@nephio-demo-5:~$ kg role,rolebinding -n nimbus | grep kubearmor
role.rbac.authorization.k8s.io/kubearmor-controller-leader-election-role 2024-07-03T15:05:26Z
rolebinding.rbac.authorization.k8s.io/kubearmor-controller-leader-election-rolebinding Role/kubearmor-controller-leader-election-role 13m
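Until the chart cleans these up on uninstall, the leftovers shown above can be removed manually. The names below are taken from the output in this thread (the ClusterRole names from the bindings' ClusterRole column); double-check them in your cluster first:

```shell
# Cluster-scoped leftovers
kubectl delete clusterrolebinding \
  kubearmor-clusterrolebinding \
  kubearmor-controller-clusterrolebinding \
  kubearmor-controller-proxy-rolebinding \
  kubearmor-relay-clusterrolebinding \
  kubearmor-snitch-binding \
  --ignore-not-found
kubectl delete clusterrole \
  kubearmor-clusterrole \
  kubearmor-controller-clusterrole \
  kubearmor-controller-proxy-role \
  kubearmor-relay-clusterrole \
  kubearmor-snitch \
  --ignore-not-found

# Namespaced leftovers in the nimbus namespace
kubectl -n nimbus delete role kubearmor-controller-leader-election-role --ignore-not-found
kubectl -n nimbus delete rolebinding kubearmor-controller-leader-election-rolebinding --ignore-not-found
```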
Intents involving KubeArmor are not working when KubeArmor is installed via the combined Helm chart; they work when the nimbus-kubearmor adapter and KubeArmor are installed separately. The combined Helm chart installs all the operators/adapters and KubeArmor in the nimbus namespace.