Closed sybadm closed 5 months ago
@pjbgf do you have any insights here?
Some info from the spod logs:
I1207 17:39:51.453204 992474 enricher.go:507] "audit" logger="log-enricher" timestamp="1701970789.495:212734" type="apparmor" node="aks-systempool-32724526-vmss00001l" namespace="security-profiles-operator" pod="spod-czdng" container="security-profiles-operator" executable="security-profil" pid=992430 apparmor="STATUS" operation="profile_replace" profile="unconfined" name="test-profile"
Have I missed anything needed to enable AppArmor that isn't in the documentation? I'm sure many people are using it already.
@sybadm is it possible for you to access the node and see if apparmor has loaded the profile?
@sybadm would you be able to share logs from the spod pod? It would be good to get a glimpse of any apparmor-related messages in your syslog as well.
@sybadm is it possible for you to access the node and see if apparmor has loaded the profile?
I'm not sure which location the AppArmor profiles should go to, but I don't see anything at SPO's default location:
$ kubectl debug node/aks-i2mrpsge2np-16288669-vmss0000ft -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
Creating debugging pod node-debugger-aks-i2mrpsge2np-16288669-vmss0000ft-nmzh6 with container debugger on node aks-i2mrpsge2np-16288669-vmss0000ft.
If you don't see a command prompt, try pressing enter.
root@aks-i2mrpsge2np-16288669-vmss0000FT:/# chroot /host
# cd /var/lib/security-profiles-operator
# ls -l
total 28
drwxr--r-- 2 65535 65535 4096 Dec 7 20:24 default
drwxr--r-- 2 65535 65535 12288 Dec 8 09:19 app-dvlp-i2
-rw-r--r-- 1 root root 33 Dec 6 15:43 kubelet-config.json
drwxr--r-- 2 65535 65535 4096 Dec 6 14:57 seccomp
drwxr--r-- 2 65535 65535 4096 Dec 6 14:49 security-profiles-operator
@pjbgf spod DaemonSet logs attached: Spod-ds.log
I tried it on AKS and the profile itself seems to be applied as well as loaded:
root@aks-userpool-24947339-vmss000000:/# apparmor_status
apparmor module is loaded.
14 profiles are loaded.
14 profiles are in enforce mode.
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/chronyd
/{,usr/}sbin/dhclient
cri-containerd.apparmor.d
lsb_release
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
tcpdump
test-profile
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
23 processes have profiles defined.
23 processes are in enforce mode.
/usr/local/bin/cloud-node-manager (4059) cri-containerd.apparmor.d
/livenessprobe (4117) cri-containerd.apparmor.d
/livenessprobe (4119) cri-containerd.apparmor.d
/csi-node-driver-registrar (4185) cri-containerd.apparmor.d
/csi-node-driver-registrar (4187) cri-containerd.apparmor.d
/coredns (5480) cri-containerd.apparmor.d
/usr/bin/azurepolicyaddon (5677) cri-containerd.apparmor.d
/usr/bin/bash (11794) cri-containerd.apparmor.d
/usr/bin/inotifywait (11832) cri-containerd.apparmor.d
/usr/sbin/crond (12179) cri-containerd.apparmor.d
/usr/sbin/mdsd (13264) cri-containerd.apparmor.d
/usr/sbin/MetricsExtension (14004) cri-containerd.apparmor.d
/opt/microsoft/otelcollector/otelcollector (14008) cri-containerd.apparmor.d
/usr/bin/telegraf (14137) cri-containerd.apparmor.d
/usr/bin/fluent-bit (14141) cri-containerd.apparmor.d
/usr/bin/inotifywait (14145) cri-containerd.apparmor.d
/busybin/sleep (14148) cri-containerd.apparmor.d
/usr/sbin/nginx (36950) cri-containerd.apparmor.d
/usr/bin/bash (46940) cri-containerd.apparmor.d
/usr/bin/dash (47015) cri-containerd.apparmor.d
/usr/bin/bash (47056) cri-containerd.apparmor.d
/usr/bin/bash (53971) cri-containerd.apparmor.d
/usr/sbin/aa-status (56840) cri-containerd.apparmor.d
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.
Is something within the profile not correct?
That's great. I can see it as well. Not sure if the profile has a bug?
root@aks-i2mrpsge2np-16288669-vmss0000FT:/# chroot /host
# apparmor_status
apparmor module is loaded.
14 profiles are loaded.
14 profiles are in enforce mode.
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/chronyd
/{,usr/}sbin/dhclient
cri-containerd.apparmor.d
lsb_release
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
tcpdump
test-profile
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
17 processes have profiles defined.
17 processes are in enforce mode.
/usr/local/bin/azure-cns (1727719) cri-containerd.apparmor.d
/opt/cni/bin/azure-vnet-telemetry (1727879) cri-containerd.apparmor.d
/usr/bin/bash (1731109) cri-containerd.apparmor.d
/go/main (1731132) cri-containerd.apparmor.d
/usr/bin/bash (2059349) cri-containerd.apparmor.d
/usr/bin/dash (2060944) cri-containerd.apparmor.d
/usr/sbin/aa-status (2061033) cri-containerd.apparmor.d
/usr/local/bin/install-cni (3045668) cri-containerd.apparmor.d
/csi-node-driver-registrar (3047381) cri-containerd.apparmor.d
/livenessprobe (3047458) cri-containerd.apparmor.d
/bin/secrets-store-csi-driver-provider-azure (3048247) cri-containerd.apparmor.d
/usr/local/bin/cloud-node-manager (3050245) cri-containerd.apparmor.d
/livenessprobe (3050352) cri-containerd.apparmor.d
/csi-node-driver-registrar (3050390) cri-containerd.apparmor.d
/livenessprobe (3051617) cri-containerd.apparmor.d
/csi-node-driver-registrar (3051657) cri-containerd.apparmor.d
/pause (3053937) cri-containerd.apparmor.d
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.
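One detail stands out in the aa-status output above: although test-profile is loaded in enforce mode, every confined process is attached to cri-containerd.apparmor.d and none to test-profile. A quick way to check whether the runtime actually attached the profile to the target container is to read the confinement label of its main process. This is a sketch to run on the node (after chroot /host); the container name filter is a placeholder:

```shell
# Find the target container's PID via crictl and read its AppArmor
# confinement label ("test-container" is a placeholder name).
crictl ps -q --name test-container | head -n1 | while read cid; do
  pid=$(crictl inspect "$cid" | grep -m1 '"pid"' | tr -dc '0-9')
  # Should print "test-profile (enforce)" if the runtime applied the profile;
  # "cri-containerd.apparmor.d (enforce)" means it fell back to the default.
  cat "/proc/$pid/attr/current"
done
```

If the label shows the default runtime profile instead of test-profile, the problem is in how the pod references the profile rather than in the profile itself.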
Also, the profile file itself looks good to me:
# pwd
/etc/apparmor.d
# ls -lrt
total 56
-rw-r--r-- 1 root root 1592 Nov 16 2021 usr.sbin.rsyslogd
drwxr-xr-x 2 root root 4096 Dec 30 2021 force-complain
-rw-r--r-- 1 root root 2628 Feb 8 2022 usr.sbin.chronyd
-rw-r--r-- 1 root root 3448 Mar 17 2022 usr.bin.man
-rw-r--r-- 1 root root 1189 Oct 19 2022 nvidia_modprobe
-rw-r--r-- 1 root root 1339 Oct 19 2022 lsb_release
-rw-r--r-- 1 root root 3500 Jan 31 2023 sbin.dhclient
-rw-r--r-- 1 root root 1518 Feb 10 2023 usr.bin.tcpdump
drwxr-xr-x 2 root root 4096 Oct 4 02:09 disable
drwxr-xr-x 2 root root 4096 Oct 4 02:09 abi
drwxr-xr-x 4 root root 4096 Oct 4 02:09 abstractions
drwxr-xr-x 5 root root 4096 Oct 4 02:09 tunables
drwxr-xr-x 2 root root 4096 Oct 4 20:07 local
-rw-r--r-- 1 root root 162 Dec 8 11:10 test-profile
# cat test-profile
#include <tunables/global>
profile test-profile flags=(attach_disconnected) {
  #include <abstractions/base>
  file,
  # Deny all file writes.
  deny /** w,
}
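Given the deny /** w, rule above, a simple enforcement check is to attempt a write from inside a container that should be confined by test-profile. A sketch (deployment and namespace names are placeholders); if the profile is applied, the write should fail with "Permission denied":

```shell
# Exec into a container that should be confined by test-profile
# (deployment and namespace names are placeholders).
kubectl exec -n default deploy/test-deployment -- sh -c '
  cat /proc/self/attr/current        # confinement label of this shell
  touch /tmp/probe 2>&1 \
    && echo "write succeeded: profile NOT enforced" \
    || echo "write denied: profile enforced"
'
```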
May be related to this: https://github.com/MicrosoftDocs/azure-docs/issues/114123
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
As AppArmor support is not GA yet, I'm not sure whether this is a bug or a feature request.
What happened:
AppArmor does not work on AKS when following the steps in the installation manual. I have tested this on vanilla Kubernetes 1.28.2 and on AKS 1.27.3.
What you expected to happen:
Expected the AppArmor profile to be in effect.
How to reproduce it (as minimally and precisely as possible):
AppArmorProfile.yaml
Deployment.yaml (tested with the localhost/ prefix as below, as well as without it)
No error on writes: file writes succeed even though the profile denies them.
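For reference, on Kubernetes 1.27/1.28 the profile is referenced via the beta pod annotation rather than a securityContext field. A minimal sketch of what the Deployment.yaml would contain (all names and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: test}
  template:
    metadata:
      labels: {app: test}
      annotations:
        # Pre-1.30 beta annotation; the key suffix must match the container name.
        container.apparmor.security.beta.kubernetes.io/test-container: localhost/test-profile
    spec:
      containers:
      - name: test-container
        image: busybox         # placeholder image
        command: ["sleep", "infinity"]
```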
Anything else we need to know?:
Environment:
K8S: 1.28.2 AKS: 1.27.3