kubernetes-sigs / security-profiles-operator

The Kubernetes Security Profiles Operator
Apache License 2.0
694 stars · 107 forks

AppArmor does not work #2008

Closed sybadm closed 5 months ago

sybadm commented 10 months ago

As AppArmor support is not GA yet, I'm not sure whether this is a bug report or a feature request.

What happened:

AppArmor does not work on AKS when following the steps in the installation manual. I have tested this on vanilla Kubernetes 1.28.2 and on AKS 1.27.3.

What you expected to happen:

I expected the AppArmor profile to be enforced.

How to reproduce it (as minimally and precisely as possible):

kubectl -nsecurity-profiles-operator patch spod spod  --type=merge -p='{"spec":{"webhookOptions":[{"name":"binding.spo.io","namespaceSelector":{"matchExpressions":[{"key":"control-plane","operator":"DoesNotExist"}]}},{"name":"recording.spo.io","namespaceSelector":{"matchExpressions":[{"key":"control-plane","operator":"DoesNotExist"}]}}]}}'

kubectl -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"verbosity":1}}'

kubectl -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"enableAppArmor":true}}'
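Once patched, it may help to read the settings back before going further; a sketch using the spod fields from the commands above:

```shell
# Read back the spod settings patched above; both should reflect the merge patches.
kubectl -n security-profiles-operator get spod spod \
  -o jsonpath='{.spec.enableAppArmor}{"\n"}{.spec.verbosity}{"\n"}'

# The reconfigured daemonset should roll out new spod pods.
kubectl -n security-profiles-operator rollout status ds spod
```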

AppArmorProfile.yaml

---
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: AppArmorProfile
metadata:
  name: test-profile
  annotations:
    description: Block writing to any files in the disk.
spec:
  policy: |
    #include <tunables/global>

    profile test-profile flags=(attach_disconnected) {
      #include <abstractions/base>

      file,

      # Deny all file writes.
      deny /** w,
    }

Deployment.yaml (tested both with the localhost/ prefix as below and without it):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/test-container: localhost/test-profile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-container
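One thing worth double-checking (my observation, not something confirmed in this thread): in the manifest above the annotation sits on the Deployment's own metadata, but the kubelet reads `container.apparmor.security.beta.kubernetes.io/...` from the Pod object, so for a Deployment it must go under `spec.template.metadata.annotations`. A sketch of the template-level placement:

```yaml
# Sketch: the AppArmor annotation on the pod template, where the kubelet
# actually reads it (Deployment-level metadata is not copied to pods).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        container.apparmor.security.beta.kubernetes.io/test-container: localhost/test-profile
    spec:
      containers:
      - image: nginx
        name: test-container
```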

Writes succeed with no error, even though the profile should deny them:

# kubectl get pods 
NAME                        READY   STATUS    RESTARTS   AGE
test-pod-6494887549-d6t68   2/2     Running   0          12s
# kubectl exec -it test-pod-6494887549-d6t68 -- bash
root@test-pod-6494887549-d6t68:/# mkdir kl
root@test-pod-6494887549-d6t68:/# rmdir kl
root@test-pod-6494887549-d6t68:/# exit

Anything else we need to know?:

Environment:

Kubernetes: 1.28.2; AKS: 1.27.3

saschagrunert commented 10 months ago

@pjbgf do you have any insights here?

sybadm commented 10 months ago

Some info from the spod logs:

I1207 17:39:51.453204 992474 enricher.go:507] "audit" logger="log-enricher" timestamp="1701970789.495:212734" type="apparmor" node="aks-systempool-32724526-vmss00001l" namespace="security-profiles-operator" pod="spod-czdng" container="security-profiles-operator" executable="security-profil" pid=992430 apparmor="STATUS" operation="profile_replace" profile="unconfined" name="test-profile"
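As an aside, these log-enricher lines are plain `key="value"` / `key=value` pairs, so they are easy to slice up when grepping for AppArmor events. A minimal parsing sketch (field names taken from the line above; nothing SPO-specific assumed):

```python
import re

# key="quoted value" or key=bare_value pairs, as emitted by the log-enricher.
FIELD_RE = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

def parse_audit_line(line: str) -> dict:
    """Extract the key/value fields from one spod audit log line."""
    return {key: quoted if quoted else bare
            for key, quoted, bare in FIELD_RE.findall(line)}

sample = ('I1207 17:39:51.453204 992474 enricher.go:507] "audit" '
          'logger="log-enricher" type="apparmor" pid=992430 '
          'apparmor="STATUS" operation="profile_replace" '
          'profile="unconfined" name="test-profile"')

fields = parse_audit_line(sample)
print(fields["operation"], fields["name"])  # profile_replace test-profile
```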

sybadm commented 10 months ago

Have I missed anything needed to enable AppArmor that is not in the documentation? I'm sure many people are already using it.

saschagrunert commented 10 months ago

@sybadm is it possible for you to access the node and see if apparmor has loaded the profile?

pjbgf commented 10 months ago

@sybadm would you be able to share logs from the spod pod? It would also be good to get a glimpse of any AppArmor-related messages in your syslogs.

sybadm commented 10 months ago

@sybadm is it possible for you to access the node and see if apparmor has loaded the profile?

I'm not sure where the AppArmor profile should end up, but I don't see anything in SPO's default location:

$ kubectl debug  node/aks-i2mrpsge2np-16288669-vmss0000ft  -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
Creating debugging pod node-debugger-aks-i2mrpsge2np-16288669-vmss0000ft-nmzh6 with container debugger on node aks-i2mrpsge2np-16288669-vmss0000ft.
If you don't see a command prompt, try pressing enter.
root@aks-i2mrpsge2np-16288669-vmss0000FT:/# chroot /host
# cd /var/lib/security-profiles-operator
# ls -l
total 28
drwxr--r-- 2 65535 65535  4096 Dec  7 20:24 default
drwxr--r-- 2 65535 65535 12288 Dec  8 09:19 app-dvlp-i2
-rw-r--r-- 1 root  root     33 Dec  6 15:43 kubelet-config.json
drwxr--r-- 2 65535 65535  4096 Dec  6 14:57 seccomp
drwxr--r-- 2 65535 65535  4096 Dec  6 14:49 security-profiles-operator
sybadm commented 10 months ago

@pjbgf spod DaemonSet logs attached: Spod-ds.log

saschagrunert commented 10 months ago

I tried it on AKS and the profile itself seems to be applied as well as loaded:

root@aks-userpool-24947339-vmss000000:/# apparmor_status
apparmor module is loaded.
14 profiles are loaded.
14 profiles are in enforce mode.
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/chronyd
   /{,usr/}sbin/dhclient
   cri-containerd.apparmor.d
   lsb_release
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   tcpdump
   test-profile
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
23 processes have profiles defined.
23 processes are in enforce mode.
   /usr/local/bin/cloud-node-manager (4059) cri-containerd.apparmor.d
   /livenessprobe (4117) cri-containerd.apparmor.d
   /livenessprobe (4119) cri-containerd.apparmor.d
   /csi-node-driver-registrar (4185) cri-containerd.apparmor.d
   /csi-node-driver-registrar (4187) cri-containerd.apparmor.d
   /coredns (5480) cri-containerd.apparmor.d
   /usr/bin/azurepolicyaddon (5677) cri-containerd.apparmor.d
   /usr/bin/bash (11794) cri-containerd.apparmor.d
   /usr/bin/inotifywait (11832) cri-containerd.apparmor.d
   /usr/sbin/crond (12179) cri-containerd.apparmor.d
   /usr/sbin/mdsd (13264) cri-containerd.apparmor.d
   /usr/sbin/MetricsExtension (14004) cri-containerd.apparmor.d
   /opt/microsoft/otelcollector/otelcollector (14008) cri-containerd.apparmor.d
   /usr/bin/telegraf (14137) cri-containerd.apparmor.d
   /usr/bin/fluent-bit (14141) cri-containerd.apparmor.d
   /usr/bin/inotifywait (14145) cri-containerd.apparmor.d
   /busybin/sleep (14148) cri-containerd.apparmor.d
   /usr/sbin/nginx (36950) cri-containerd.apparmor.d
   /usr/bin/bash (46940) cri-containerd.apparmor.d
   /usr/bin/dash (47015) cri-containerd.apparmor.d
   /usr/bin/bash (47056) cri-containerd.apparmor.d
   /usr/bin/bash (53971) cri-containerd.apparmor.d
   /usr/sbin/aa-status (56840) cri-containerd.apparmor.d
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.

Is something within the profile not correct?

sybadm commented 10 months ago

I tried it on AKS and the profile itself seems to be applied as well as loaded: […]

That's great, I can see it as well. Could there be a bug in the profile itself?

root@aks-i2mrpsge2np-16288669-vmss0000FT:/# chroot /host
# apparmor_status
apparmor module is loaded.
14 profiles are loaded.
14 profiles are in enforce mode.
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/chronyd
   /{,usr/}sbin/dhclient
   cri-containerd.apparmor.d
   lsb_release
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   tcpdump
   test-profile
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
17 processes have profiles defined.
17 processes are in enforce mode.
   /usr/local/bin/azure-cns (1727719) cri-containerd.apparmor.d
   /opt/cni/bin/azure-vnet-telemetry (1727879) cri-containerd.apparmor.d
   /usr/bin/bash (1731109) cri-containerd.apparmor.d
   /go/main (1731132) cri-containerd.apparmor.d
   /usr/bin/bash (2059349) cri-containerd.apparmor.d
   /usr/bin/dash (2060944) cri-containerd.apparmor.d
   /usr/sbin/aa-status (2061033) cri-containerd.apparmor.d
   /usr/local/bin/install-cni (3045668) cri-containerd.apparmor.d
   /csi-node-driver-registrar (3047381) cri-containerd.apparmor.d
   /livenessprobe (3047458) cri-containerd.apparmor.d
   /bin/secrets-store-csi-driver-provider-azure (3048247) cri-containerd.apparmor.d
   /usr/local/bin/cloud-node-manager (3050245) cri-containerd.apparmor.d
   /livenessprobe (3050352) cri-containerd.apparmor.d
   /csi-node-driver-registrar (3050390) cri-containerd.apparmor.d
   /livenessprobe (3051617) cri-containerd.apparmor.d
   /csi-node-driver-registrar (3051657) cri-containerd.apparmor.d
   /pause (3053937) cri-containerd.apparmor.d
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.
sybadm commented 10 months ago

The profile file also looks good to me:

# pwd
/etc/apparmor.d
# ls -lrt
total 56
-rw-r--r-- 1 root root 1592 Nov 16  2021 usr.sbin.rsyslogd
drwxr-xr-x 2 root root 4096 Dec 30  2021 force-complain
-rw-r--r-- 1 root root 2628 Feb  8  2022 usr.sbin.chronyd
-rw-r--r-- 1 root root 3448 Mar 17  2022 usr.bin.man
-rw-r--r-- 1 root root 1189 Oct 19  2022 nvidia_modprobe
-rw-r--r-- 1 root root 1339 Oct 19  2022 lsb_release
-rw-r--r-- 1 root root 3500 Jan 31  2023 sbin.dhclient
-rw-r--r-- 1 root root 1518 Feb 10  2023 usr.bin.tcpdump
drwxr-xr-x 2 root root 4096 Oct  4 02:09 disable
drwxr-xr-x 2 root root 4096 Oct  4 02:09 abi
drwxr-xr-x 4 root root 4096 Oct  4 02:09 abstractions
drwxr-xr-x 5 root root 4096 Oct  4 02:09 tunables
drwxr-xr-x 2 root root 4096 Oct  4 20:07 local
-rw-r--r-- 1 root root  162 Dec  8 11:10 test-profile

# cat test-profile
#include <tunables/global>

profile test-profile flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
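A complementary check (not shown in the thread) is to ask the kernel which profile actually confines the workload from inside the container. If the annotation took effect, this names the profile in enforce mode; anything else points at the pod spec rather than the node. The pod name is the one from the repro above:

```shell
# Which AppArmor profile confines PID 1 of the container?
kubectl exec -it test-pod-6494887549-d6t68 -- cat /proc/1/attr/current
# "test-profile (enforce)" means the profile is applied to the container;
# "unconfined" or the runtime default means the annotation never reached the pod.
```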
sybadm commented 10 months ago

This may be related to https://github.com/MicrosoftDocs/azure-docs/issues/114123

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 5 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/security-profiles-operator/issues/2008#issuecomment-2096016613):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.