Closed: @sybadm closed this issue 12 months ago
@sybadm is the namespace labeled correctly per https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md#label-namespaces-for-binding-and-recording ?
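For reference, the recording/binding labels from that doc look roughly like this (the default namespace is assumed here; check the linked section for the exact labels):

$ kubectl label ns default spo.x-k8s.io/enable-recording=true
$ kubectl label ns default spo.x-k8s.io/enable-binding=true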
@saschagrunert that was helpful. Now I'm getting: Unable to find profile in cluster for container ID
libbpf: prog 'sys_enter': relo #3: <byte_off> [382] struct mnt_namespace.ns.inum (0:0:2 @ offset 16)
libbpf: prog 'sys_enter': relo #3: matching candidate #0 <byte_off> [39224] struct mnt_namespace.ns.inum (0:0:2 @ offset 16)
libbpf: prog 'sys_enter': relo #3: patched insn #22 (ALU/ALU64) imm 16 -> 16
I1205 09:37:50.594629 3386371 bpfrecorder.go:420] "Getting bpf program sys_enter" logger="bpf-recorder"
I1205 09:37:50.594674 3386371 bpfrecorder.go:426] "Attaching bpf tracepoint" logger="bpf-recorder"
I1205 09:37:50.595077 3386371 bpfrecorder.go:431] "Getting syscalls map" logger="bpf-recorder"
I1205 09:37:50.595184 3386371 bpfrecorder.go:437] "Getting pid_mntns map" logger="bpf-recorder"
I1205 09:37:50.596711 3386371 bpfrecorder.go:461] "Module successfully loaded" logger="bpf-recorder"
I1205 09:37:50.596740 3386371 bpfrecorder.go:785] "Unloading bpf module" logger="bpf-recorder"
I1205 09:37:50.612239 3386371 bpfrecorder.go:193] "Starting GRPC API server" logger="bpf-recorder"
libbpf: prog 'sys_enter': relo #3: <byte_off> [382] struct mnt_namespace.ns.inum (0:0:2 @ offset 16)
libbpf: prog 'sys_enter': relo #3: matching candidate #0 <byte_off> [39224] struct mnt_namespace.ns.inum (0:0:2 @ offset 16)
libbpf: prog 'sys_enter': relo #3: patched insn #22 (ALU/ALU64) imm 16 -> 16
I1205 09:38:26.010678 1723838 bpfrecorder.go:420] "Getting bpf program sys_enter" logger="bpf-recorder"
I1205 09:38:26.010714 1723838 bpfrecorder.go:426] "Attaching bpf tracepoint" logger="bpf-recorder"
I1205 09:38:26.011285 1723838 bpfrecorder.go:431] "Getting syscalls map" logger="bpf-recorder"
I1205 09:38:26.011354 1723838 bpfrecorder.go:437] "Getting pid_mntns map" logger="bpf-recorder"
I1205 09:38:26.013397 1723838 bpfrecorder.go:461] "Module successfully loaded" logger="bpf-recorder"
I1205 09:38:26.013440 1723838 bpfrecorder.go:785] "Unloading bpf module" logger="bpf-recorder"
I1205 09:38:26.036121 1723838 bpfrecorder.go:193] "Starting GRPC API server" logger="bpf-recorder"
I1205 10:23:42.912760 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=16
I1205 10:23:43.159794 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=16
I1205 10:23:44.852250 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=17
I1205 10:23:45.080185 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=17
I1205 10:23:47.148089 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=18
I1205 10:23:47.369228 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=18
I1205 10:23:49.870973 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=19
E1205 10:23:49.926346 3943515 bpfrecorder.go:630] "Unable to find profile in cluster for container ID" err="searching container ID d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49: wait on retry: timed out waiting for the condition" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" pid=4013476 mntns=4026533575
I1205 10:23:50.083975 3943515 bpfrecorder.go:697] "Looking up container ID in cluster" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" try=19
E1205 10:23:50.150302 3943515 bpfrecorder.go:630] "Unable to find profile in cluster for container ID" err="searching container ID d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49: wait on retry: timed out waiting for the condition" logger="bpf-recorder" id="d64cc21d8de45c85afea7d873d3fc95eb598531284f6095e4c7f6ddb061a5c49" pid=4013477 mntns=4026533575
libbpf: prog 'sys_enter': relo #3: <byte_off> [382] struct mnt_namespace.ns.inum (0:0:2 @ offset 16)
I still don't see the seccompprofile test-recording.
$ cat test-profile-recording.yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: test-recording
  namespace: default
spec:
  kind: SeccompProfile
  recorder: bpf
  podSelector:
    matchLabels:
      app: my-app
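For recording to trigger, a pod has to carry the label from podSelector; a minimal sketch of such a pod (the pod name, image, and sleep command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: default
  labels:
    app: my-app
spec:
  containers:
  - name: alpine
    image: alpine:latest
    command: ["sleep", "3600"]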
$ kubectl get seccompprofile -o wide -A
NAMESPACE                    NAME                    STATUS      AGE   LOCALHOSTPROFILE
default                      log                     Installed   81m   operator/default/log.json
default                      test-recording-alpine   Installed   12m   operator/default/test-recording-alpine.json
security-profiles-operator   log-enricher-trace      Installed   83m   operator/security-profiles-operator/log-enricher-trace.json
@sybadm tested it on AKS now, and it should work once the nginx pod has been running healthy for a couple of seconds. After removal, the profile test-recording-nginx should be installed.
I'm not sure, but you may have to do this as well: https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md#installation-on-aks
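If I recall that section correctly, it is about stopping the AKS admission enforcer from reconciling the operator's webhook configuration; the suggested annotation is along these lines (the webhook name here is an assumption, verify it against the linked doc):

$ kubectl annotate mutatingwebhookconfiguration spo-mutating-webhook-configuration 'admissions.enforcer/disabled=true'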
You are an absolute gem! I thought I had this applied, but maybe I forgot it when I re-installed. Recording works now!
I believe the documentation needs some polishing.
Is there any way to keep recording enabled without deleting the pods, to flush the recording info into the SPOD log?
Ta
Is there any way to keep recording enabled without deleting the pods, to flush the recording info into the SPOD log?
Not right now, we have no further trigger implemented yet. Might be a good feature request, though.
ta
Any advice on this... Sorry, I know I should not use the thread for discussion, but there should be an option to have discussions.
$ kubectl -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"apparmorenabled":"true"}}'
Warning: unknown field "spec.apparmorenabled"
securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched (no change)
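One way to discover which field names the SPOD CRD actually accepts (assuming the CRD publishes its OpenAPI schema) is kubectl explain:

$ kubectl explain spod.spec | grep -i apparmor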
@sybadm can you try enableAppArmor? This may be a doc bug.
You made my day again!
$ kubectl -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"enableAppArmor":"true"}}'
The SecurityProfilesOperatorDaemon "spod" is invalid: spec.enableAppArmor: Invalid value: "string": spec.enableAppArmor in body must be of type boolean: "string"
$ kubectl -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"enableAppArmor":true}}'
securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched
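To double-check that the flag stuck, something along these lines should print true:

$ kubectl -n security-profiles-operator get spod spod -o jsonpath='{.spec.enableAppArmor}'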
@sybadm do you want to provide a doc update PR or should I take care of that?
Will do, there are a few corrections ... I will consolidate them all in one.
Sorry, I'm back again; I don't want to open a new thread for this. The AppArmorProfile does not take effect.
$ cat ap.yaml
---
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: AppArmorProfile
metadata:
  name: test-profile
  annotations:
    description: Block writing to any files in the disk.
spec:
  policy: |
    #include <tunables/global>

    profile test-profile flags=(attach_disconnected) {
      #include <abstractions/base>

      file,

      # Deny all file writes.
      deny /** w,
    }
$ cat dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/test-container: localhost/test-profile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-container
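One thing worth checking, as an aside not confirmed in this thread: the container.apparmor.security.beta.kubernetes.io/<container> annotation is read from the Pod itself, so for a Deployment it belongs under spec.template.metadata.annotations rather than the Deployment's own metadata, roughly:

spec:
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/test-container: localhost/test-profile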
$ kubectl get apparmorprofile -o yaml
apiVersion: v1
items:
- apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
  kind: AppArmorProfile
  metadata:
    annotations:
      description: Block writing to any files in the disk.
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"security-profiles-operator.x-k8s.io/v1alpha1","kind":"AppArmorProfile","metadata":{"annotations":{"description":"Block writing to any files in the disk."},"name":"test-profile","namespace":"default"},"spec":{"policy":"#include \u003ctunables/global\u003e\n\nprofile test-profile flags=(attach_disconnected) {\n #include \u003cabstractions/base\u003e\n\n file,\n\n # Deny all file writes.\n deny /** w,\n}\n"}}
    creationTimestamp: "2023-12-05T16:56:42Z"
    finalizers:
    - aks-i3mrpsgenp-18055575-vmss00000c-deleted
    - aks-systempool-32724526-vmss000007-deleted
    - aks-systempool-32724526-vmss00001j-deleted
    - aks-i2mrpsge2np-16288669-vmss0000fl-deleted
    - aks-sharednp-50716919-vmss000000-deleted
    - aks-systempool-32724526-vmss00000m-deleted
    - aks-i2mrpsge2np-16288669-vmss0000fk-deleted
    - aks-i2mrdashnp-23170901-vmss000009-deleted
    - aks-sharednp-50716919-vmss000009-deleted
    - aks-sharednp-50716919-vmss000003-deleted
    - aks-systempool-32724526-vmss00001i-deleted
    generation: 1
    labels:
      spo.x-k8s.io/profile-id: AppArmorProfile-test-profile
    name: test-profile
    namespace: default
    resourceVersion: "59596250"
    uid: ccc36c3c-6523-4138-afc7-a7af417971a0
  spec:
    policy: |
      #include <tunables/global>

      profile test-profile flags=(attach_disconnected) {
        #include <abstractions/base>

        file,

        # Deny all file writes.
        deny /** w,
      }
kind: List
metadata:
  resourceVersion: ""
$ kubectl exec -it test-pod-6964545f4c-75gn4 -- bash
root@test-pod-6964545f4c-75gn4:/# touch abc
root@test-pod-6964545f4c-75gn4:/# rm abc
root@test-pod-6964545f4c-75gn4:/# exit
exit
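If node access is available, one way to verify whether the profile was actually loaded is the standard AppArmor sysfs listing on the node (assuming AppArmor is enabled in the kernel):

$ grep test-profile /sys/kernel/security/apparmor/profiles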
@sybadm do you want to provide a doc update PR or should I take care of that?
I'm getting: Pull request creation failed. Validation failed: must be a collaborator
@shysank can we create a new issue for that? @pjbgf may have some insights here as well.
I'm having a hard time setting up eBPF recording profiles on AKS 1.27.3.
Although all SPOD pods are running normally with no errors, I do not see the commands getting recorded in kubectl -n security-profiles-operator logs --selector name=spod -c bpf-recorder.
What happened:
Current SPOD settings:
test-profile-recording.yaml
test-pod.yaml
Logs:
What you expected to happen:
Expect the eBPF recording in the bpf-recorder container's log
How to reproduce it (as minimally and precisely as possible):
Steps above
Anything else we need to know?:
Environment:
- OS (e.g: cat /etc/os-release):
- Kernel (e.g. uname -a):