Closed. Jeroen0494 closed this issue 1 year ago.
Can you paste what the mariadb-0 pod manifest looks like (kubectl get -oyaml)?
This is the code that crashes:
// pod is being created or updated so ensure it is linked to a seccomp/selinux profile
for _, profileIndex := range getSeccompProfilesFromPod(pod) {
	profileElements := strings.Split(profileIndex, "/")
	profileNamespace := profileElements[1]
So it looks like there are some seccomp profiles that don't have a slash in their name. Based on the version you are running, I suspect it's RuntimeDefault, but I wanted to confirm.
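For reference, here is a minimal standalone reproduction of why that indexing can panic. The profile strings are taken from the annotation form above and the suspected RuntimeDefault case; splitProfile is a hypothetical helper, not the operator's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitProfile is a hypothetical helper mirroring what the quoted operator
// code does with each profile string before indexing into the result.
func splitProfile(profile string) []string {
	return strings.Split(profile, "/")
}

func main() {
	// The annotation form seen in the manifest ("localhost/<file>") vs. a
	// bare name such as "RuntimeDefault": only the former has an element
	// at index 1, so profileElements[1] panics for the latter.
	for _, profile := range []string{
		"localhost/mariadb-seccomp-profile.json",
		"RuntimeDefault",
	} {
		elements := splitProfile(profile)
		fmt.Printf("%q -> %d element(s)\n", profile, len(elements))
	}
}
```

strings.Split never returns an empty slice, but with no separator it returns a single element, so any unconditional access past index 0 is unsafe.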
Hi,
Thank you for the response!
I'm running custom seccomp profiles I created myself, for MariaDB, Nextcloud, NGINX, Redis, and a couple of others.
Here is the output:
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 32f91c52d244f76d7fcc61d76281556eaf201a74f3fd52653a24d76bc162b930
cni.projectcalico.org/podIP: 10.233.105.165/32
cni.projectcalico.org/podIPs: 10.233.105.165/32
seccomp.security.alpha.kubernetes.io/pod: localhost/mariadb-seccomp-profile.json
creationTimestamp: "2022-06-08T16:16:29Z"
generateName: mariadb-
labels:
app: mariadb
controller-revision-hash: mariadb-59784c597d
statefulset.kubernetes.io/pod-name: mariadb-0
name: mariadb-0
namespace: nextcloud
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: mariadb
uid: cfe8c01c-296e-4ad2-a00f-d428910a40c9
resourceVersion: "2170564"
uid: 4d414974-4603-4fce-8b79-e28ed1dec960
spec:
containers:
- env:
- name: BITNAMI_DEBUG
value: "false"
- name: MARIADB_SKIP_TEST_DB
value: "yes"
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-root-password
name: mariadb-secret
- name: MARIADB_USER
value: nextcloud
- name: MARIADB_PASSWORD
valueFrom:
secretKeyRef:
key: mariadb-password
name: mariadb-secret
- name: MARIADB_DATABASE
value: nextcloud
image: docker.io/bitnami/mariadb:10.7.3
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MARIADB_ROOT_PASSWORD:-}"
if [[ -f "${MARIADB_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MARIADB_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
failureThreshold: 3
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: mariadb
ports:
- containerPort: 3306
name: mysql
protocol: TCP
readinessProbe:
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MARIADB_ROOT_PASSWORD:-}"
if [[ -f "${MARIADB_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MARIADB_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 50m
memory: 350Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
runAsUser: 1001
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mariadb/data/
name: mariadb-data
- mountPath: /bitnami/mariadb/logs/
name: mariadb-logs
- mountPath: /opt/bitnami/mariadb/conf/my_custom.cnf
name: mariadb-config
subPath: my_custom.cnf
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: mariadb-0
nodeName: mediaserver.fritz.box
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
seccompProfile:
localhostProfile: mariadb-seccomp-profile.json
type: Localhost
serviceAccount: mariadb
serviceAccountName: mariadb
subdomain: mariadb
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: mariadb-config
name: mariadb-config
- name: mariadb-data
persistentVolumeClaim:
claimName: mariadb-data
- name: mariadb-logs
persistentVolumeClaim:
claimName: mariadb-logs
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-06-08T16:16:29Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-06-10T07:53:58Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-06-10T07:53:58Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-06-08T16:16:29Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://6a6564dcf72b5337ce92d7af55da767d13f23ba3038340317b34c28738534f08
image: docker.io/bitnami/mariadb:10.7.3
imageID: docker.io/bitnami/mariadb@sha256:048de8cb8fae9d98367119be69d973a73337633eef32135fc541ea38f2ef1e4c
lastState:
terminated:
containerID: containerd://413fda1d9513e768a67e557935eb55783374f1a24effbd48a6196c02d37897fe
exitCode: 255
finishedAt: "2022-06-10T07:45:37Z"
reason: Unknown
startedAt: "2022-06-09T07:49:32Z"
name: mariadb
ready: true
restartCount: 3
started: true
state:
running:
startedAt: "2022-06-10T07:53:20Z"
hostIP: 192.168.178.43
phase: Running
podIP: 10.233.105.165
podIPs:
- ip: 10.233.105.165
qosClass: Burstable
startTime: "2022-06-08T16:16:29Z"
And here is my seccomp profile, which is stored under /var/lib/kubelet/seccomp/mariadb-seccomp-profile.json:
{
"defaultAction": "SCMP_ACT_LOG",
"architectures": [
"SCMP_ARCH_X86_64"
],
"syscalls": [
{
"names": [
"getuid",
"getgid",
"geteuid",
"getegid",
"setpgid",
"getpgrp",
"rt_sigsuspend",
"rt_sigreturn",
"pipe",
"timer_create",
"timer_settime",
"getcwd",
"sysinfo",
"times",
"munmap",
"rt_sigprocmask",
"ioctl",
"access",
"set_tid_address",
"set_robust_list",
"prlimit64",
"dup",
"fcntl",
"mmap",
"pread64",
"pwrite64",
"gettid",
"sched_yield",
"accept4",
"pipe2",
"dup2",
"getpid",
"socket",
"connect",
"sendto",
"recvfrom",
"setsockopt",
"clone",
"wait4",
"poll",
"getcwd",
"lseek",
"unlink",
"readlink",
"sysinfo",
"madvise",
"getpeername",
"geteuid",
"rt_sigtimedwait",
"sched_getaffinity",
"mremap",
"fallocate",
"dup3",
"bind",
"listen",
"fsync",
"fdatasync",
"rename",
"umask",
"rt_sigreturn",
"select",
"tgkill",
"recvmsg",
"getsockname",
"ftruncate",
"chmod",
"chown"
],
"action": "SCMP_ACT_ALLOW"
},
{
"names": [
"arch_prctl",
"brk",
"capget",
"capset",
"chdir",
"close",
"execve",
"exit",
"exit_group",
"fstat",
"fstatfs",
"futex",
"getdents64",
"getppid",
"lstat",
"mprotect",
"nanosleep",
"newfstatat",
"openat",
"prctl",
"read",
"rt_sigaction",
"statfs",
"setgid",
"setgroups",
"setuid",
"stat",
"uname",
"write"
],
"action": "SCMP_ACT_ALLOW"
},
{
"names": [
"acct",
"add_key",
"bpf",
"clock_adjtime",
"clock_settime",
"create_module",
"delete_module",
"finit_module",
"get_kernel_syms",
"get_mempolicy",
"init_module",
"ioperm",
"iopl",
"kcmp",
"kexec_file_load",
"kexec_load",
"keyctl",
"lookup_dcookie",
"mbind",
"mount",
"move_pages",
"name_to_handle_at",
"nfsservctl",
"open_by_handle_at",
"perf_event_open",
"personality",
"pivot_root",
"process_vm_readv",
"process_vm_writev",
"ptrace",
"query_module",
"quotactl",
"reboot",
"request_key",
"set_mempolicy",
"setns",
"settimeofday",
"stime",
"swapoff",
"swapon",
"_sysctl",
"sysfs",
"umount2",
"umount",
"unshare",
"uselib",
"userfaultfd",
"ustat",
"vm86old",
"vm86"
],
"action": "SCMP_ACT_ERRNO"
}
]
}
$ sudo ls /var/lib/kubelet/seccomp/
mariadb-seccomp-profile.json profile-complain-block-high-risk.json transmission-seccomp-profile.json
nextcloud-seccomp-profile.json profile-complain-unsafe.json
nginx-seccomp-profile.json redis-seccomp-profile.json
So basically I need to move all of my profiles to a folder, and then it'll work?
If I move the seccomp profile to a folder:
seccompProfile:
type: Localhost
localhostProfile: nextcloud/redis-seccomp-profile.json
Nothing really changes. This time it's Redis, because I patched that one first.
I0610 17:50:30.331382 1 controller.go:117] "msg"="Observed a panic in reconciler: runtime error: index out of range [2] with length 2" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "name"="redis-master-0" "namespace"="nextcloud" "pod"={"name":"redis-master-0","namespace":"nextcloud"} "reconcileID"="7131aa4c-6d02-4241-a4b5-b57c6dda9716"
panic: runtime error: index out of range [2] with length 2 [recovered]
panic: runtime error: index out of range [2] with length 2
goroutine 298 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
sigs.k8s.io/controller-runtime@v0.12.1/pkg/internal/controller/controller.go:118 +0x1f4
panic({0x1b05880, 0xc0006eedc8})
runtime/panic.go:838 +0x207
sigs.k8s.io/security-profiles-operator/internal/pkg/manager/workloadannotator.(*PodReconciler).Reconcile(0xc000721a80, {0xc00037a800?, 0xc000741950?}, {{{0xc00062fbe0?, 0x10?}, {0xc00062fbc0?, 0x40f3e7?}}})
sigs.k8s.io/security-profiles-operator/internal/pkg/manager/workloadannotator/workloadannotator.go:123 +0xfe5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1f3be68?, {0x1f3bf10?, 0xc000741950?}, {{{0xc00062fbe0?, 0x1b3c160?}, {0xc00062fbc0?, 0x404ad4?}}})
sigs.k8s.io/controller-runtime@v0.12.1/pkg/internal/controller/controller.go:121 +0xc8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00030c500, {0x1f3be68, 0xc000721640}, {0x1a3c9c0?, 0xc0003fef40?})
sigs.k8s.io/controller-runtime@v0.12.1/pkg/internal/controller/controller.go:320 +0x33c
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00030c500, {0x1f3be68, 0xc000721640})
sigs.k8s.io/controller-runtime@v0.12.1/pkg/internal/controller/controller.go:273 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
sigs.k8s.io/controller-runtime@v0.12.1/pkg/internal/controller/controller.go:234 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
sigs.k8s.io/controller-runtime@v0.12.1/pkg/internal/controller/controller.go:230 +0x325
# find /var/lib/kubelet/seccomp/
/var/lib/kubelet/seccomp/
/var/lib/kubelet/seccomp/transmission
/var/lib/kubelet/seccomp/transmission/transmission-seccomp-profile.json
/var/lib/kubelet/seccomp/nzbget
/var/lib/kubelet/seccomp/default
/var/lib/kubelet/seccomp/default/profile-complain-block-high-risk.json
/var/lib/kubelet/seccomp/default/profile-complain-unsafe.json
/var/lib/kubelet/seccomp/nextcloud
/var/lib/kubelet/seccomp/nextcloud/mariadb-seccomp-profile.json
/var/lib/kubelet/seccomp/nextcloud/nginx-seccomp-profile.json
/var/lib/kubelet/seccomp/nextcloud/nextcloud-seccomp-profile.json
/var/lib/kubelet/seccomp/nextcloud/redis-seccomp-profile.json
@jhrozek do you have a fix in mind? Even without fixing the root cause, we could consider adding a bounds check before the indexing:
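One possible shape for such a guard, as a sketch only: the function and variable names beyond the quoted profileElements are assumptions, not the actual patch. The "index out of range [2] with length 2" panic above suggests the reconciler also reads index 2, i.e. it expects an "operator/<namespace>/<name>.json" layout:

```go
package main

import (
	"fmt"
	"strings"
)

// profileNamespaceAndName is a hypothetical bounds-checked replacement for
// unconditional profileElements[1] / profileElements[2] indexing: it skips
// profile strings that don't have the expected three-part
// "operator/<namespace>/<name>.json" layout instead of panicking.
func profileNamespaceAndName(profileIndex string) (namespace, name string, ok bool) {
	profileElements := strings.Split(profileIndex, "/")
	if len(profileElements) < 3 {
		// e.g. "nextcloud/redis-seccomp-profile.json" (2 elements) or
		// "RuntimeDefault" (1 element): nothing for the operator to track.
		return "", "", false
	}
	return profileElements[1], profileElements[2], true
}

func main() {
	for _, p := range []string{
		"operator/nextcloud/redis-seccomp-profile.json", // expected layout
		"nextcloud/redis-seccomp-profile.json",          // panicked before
		"RuntimeDefault",                                // panicked before
	} {
		ns, name, ok := profileNamespaceAndName(p)
		fmt.Printf("%q -> ns=%q name=%q ok=%v\n", p, ns, name, ok)
	}
}
```

Skipping unrecognized profile strings (rather than erroring) matches the reconciler's job here: profiles it didn't install are simply not its concern.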
I decided to do it the proper way this time, so I removed all seccomp profiles from my deployments and deployed them via YAMLs. Unfortunately, it doesn't work, for different reasons.
XXXXX@mediaserver:~/Kubernetes/k3s/nextcloud/nextcloud$ k get SeccompProfile -n nextcloud -o yaml
apiVersion: v1
items:
- apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
annotations:
description: Enables complain mode whilst blocking high-risk syscalls. Some
essential syscalls are allowed to decrease log noise.
creationTimestamp: "2022-06-09T13:03:54Z"
finalizers:
- mediaserver.fritz.box-delete
generation: 1
labels:
spo.x-k8s.io/profile-id: SeccompProfile-mariadb-seccomp-profile
name: mariadb-seccomp-profile
namespace: nextcloud
resourceVersion: "2412325"
uid: b12fd9a9-4c7e-4ca5-8524-5c0955341d1b
spec:
architectures:
- SCMP_ARCH_X86_64
defaultAction: SCMP_ACT_LOG
syscalls:
- action: SCMP_ACT_ALLOW
names:
- getuid
- getgid
- geteuid
- getegid
- setpgid
- getpgrp
- rt_sigsuspend
- rt_sigreturn
- pipe
- timer_create
- timer_settime
- getcwd
- sysinfo
- times
- munmap
- rt_sigprocmask
- ioctl
- access
- set_tid_address
- set_robust_list
- prlimit64
- dup
- fcntl
- mmap
- pread64
- pwrite64
- gettid
- sched_yield
- accept4
- pipe2
- dup2
- getpid
- socket
- connect
- sendto
- recvfrom
- setsockopt
- clone
- wait4
- poll
- getcwd
- lseek
- unlink
- readlink
- sysinfo
- madvise
- getpeername
- geteuid
- rt_sigtimedwait
- sched_getaffinity
- mremap
- fallocate
- dup3
- bind
- listen
- fsync
- fdatasync
- rename
- umask
- rt_sigreturn
- select
- tgkill
- recvmsg
- getsockname
- ftruncate
- chmod
- chown
- action: SCMP_ACT_ALLOW
names:
- arch_prctl
- [...]
- action: SCMP_ACT_ERRNO
names:
- acct
- [...]
status:
conditions:
- lastTransitionTime: "2022-06-15T15:15:43Z"
reason: Available
status: "True"
type: Ready
localhostProfile: operator/nextcloud/mariadb-seccomp-profile.json
status: Installed
- apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
annotations:
description: Enables complain mode whilst blocking high-risk syscalls. Some
essential syscalls are allowed to decrease log noise.
creationTimestamp: "2022-06-09T13:04:02Z"
finalizers:
- mediaserver.fritz.box-delete
generation: 1
labels:
spo.x-k8s.io/profile-id: SeccompProfile-redis-seccomp-profile
name: redis-seccomp-profile
namespace: nextcloud
resourceVersion: "2412326"
uid: 71fa8b6b-e15e-4f07-a2a3-af7dc61f7732
spec:
architectures:
- SCMP_ARCH_X86_64
defaultAction: SCMP_ACT_LOG
syscalls:
- action: SCMP_ACT_ALLOW
names:
- rt_sigreturn
- access
- epoll_create
- pipe
- getrandom
- listen
- clone
- fdatasync
- getcwd
- umask
- sysinfo
- fdatasync
- munmap
- rt_sigprocmask
- ioctl
- open
- set_tid_address
- epoll_wait
- epoll_ctl
- set_robust_list
- madvise
- prlimit64
- getpid
- socket
- connect
- accept
- sendto
- recvfrom
- recvmsg
- bind
- getsockname
- setsockopt
- poll
- fcntl
- lseek
- readlink
- mmap
- action: SCMP_ACT_ALLOW
names:
- arch_prctl
- [...]
- action: SCMP_ACT_ERRNO
names:
- acct
- [...]
status:
conditions:
- lastTransitionTime: "2022-06-15T15:15:43Z"
reason: Available
status: "True"
type: Ready
localhostProfile: operator/nextcloud/redis-seccomp-profile.json
status: Installed
- apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
annotations:
description: Enables complain mode whilst blocking high-risk syscalls. Some
essential syscalls are allowed to decrease log noise.
creationTimestamp: "2022-06-09T13:03:38Z"
finalizers:
- mediaserver.fritz.box-delete
generation: 1
labels:
spo.x-k8s.io/profile-id: SeccompProfile-nextcloud-seccomp-profile
name: nextcloud-seccomp-profile
namespace: nextcloud
resourceVersion: "2412329"
uid: e1491929-7449-4dc7-a989-20b1cfb432ef
spec:
architectures:
- SCMP_ARCH_X86_64
defaultAction: SCMP_ACT_LOG
syscalls:
- action: SCMP_ACT_ALLOW
names:
- times
- getgid
- getuid
- geteuid
- getegid
- setsid
- rt_sigreturn
- pread64
- gettid
- writev
- sched_getaffinity
- set_tid_address
- pipe
- epoll_ctl
- mremap
- utimensat
- madvise
- epoll_create1
- prlimit64
- dup
- membarrier
- dup2
- listen
- getsockname
- getpeername
- socketpair
- clone
- fork
- wait4
- kill
- readlink
- chmod
- fchmod
- sysinfo
- munmap
- ioctl
- readv
- rt_sigprocmask
- pread64
- pwrite64
- open
- access
- epoll_pwait
- getrandom
- setitimer
- getpid
- socket
- connect
- accept
- sendto
- recvfrom
- shutdown
- bind
- setsockopt
- getsockopt
- poll
- fcntl
- flock
- getcwd
- lseek
- unlink
- mmap
- umask
- action: SCMP_ACT_ALLOW
names:
- arch_prctl
- [...]
- action: SCMP_ACT_ERRNO
names:
- acct
- [...]
status:
conditions:
- lastTransitionTime: "2022-06-15T15:15:43Z"
reason: Available
status: "True"
type: Ready
localhostProfile: operator/nextcloud/nextcloud-seccomp-profile.json
status: Installed
- apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
annotations:
description: Enables complain mode whilst blocking high-risk syscalls. Some
essential syscalls are allowed to decrease log noise.
creationTimestamp: "2022-06-09T13:03:45Z"
finalizers:
- mediaserver.fritz.box-delete
generation: 1
labels:
spo.x-k8s.io/profile-id: SeccompProfile-nginx-seccomp-profile
name: nginx-seccomp-profile
namespace: nextcloud
resourceVersion: "2412330"
uid: b6762f88-5a39-41ac-8e5e-e62b4eeeb15e
spec:
architectures:
- SCMP_ARCH_X86_64
defaultAction: SCMP_ACT_LOG
syscalls:
- action: SCMP_ACT_ALLOW
names:
- getuid
- getgid
- geteuid
- getegid
- munmap
- rt_sigsuspend
- rt_sigprocmask
- rt_sigreturn
- ioctl
- pread64
- pwrite64
- gettid
- readv
- open
- writev
- sched_getaffinity
- io_setup
- set_tid_address
- epoll_ctl
- madvise
- epoll_pwait
- accept4
- eventfd2
- epoll_create1
- prlimit64
- dup2
- getpid
- socket
- connect
- recvfrom
- sendfile
- sendmsg
- recvmsg
- shutdown
- bind
- listen
- getsockname
- socketpair
- setsockopt
- getsockopt
- fork
- wait4
- fcntl
- mkdir
- unlink
- mmap
- chown
- action: SCMP_ACT_ALLOW
names:
- arch_prctl
- [...]
- action: SCMP_ACT_ERRNO
names:
- acct
- [...]
status:
conditions:
- lastTransitionTime: "2022-06-15T15:15:43Z"
reason: Available
status: "True"
type: Ready
localhostProfile: operator/nextcloud/nginx-seccomp-profile.json
status: Installed
kind: List
metadata:
resourceVersion: ""
selfLink: ""
XXXXX@mediaserver:~/Kubernetes/k3s/nextcloud/nextcloud$ k get ProfileBinding -n nextcloud -o yaml
apiVersion: v1
items:
- apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileBinding
metadata:
creationTimestamp: "2022-06-15T15:30:14Z"
generation: 1
name: nextcloud-profile-binding
namespace: nextcloud
resourceVersion: "2415186"
uid: 7b247f2b-2a7e-4924-b2b0-ecd746475ad8
spec:
image: nextcloud:23.0.5-fpm-alpine
profileRef:
kind: SeccompProfile
name: nextcloud-seccomp-profile.json
status: {}
- apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileBinding
metadata:
generation: 1
name: nginx-profile-binding
namespace: nextcloud
resourceVersion: "2415187"
uid: e7a2cca7-e19e-4713-8558-f62afacdda2d
spec:
image: nginx:1.22.0-alpine
profileRef:
kind: SeccompProfile
name: nginx-seccomp-profile.json
status: {}
kind: List
metadata:
resourceVersion: ""
selfLink: ""
XXXXX@mediaserver:~/Kubernetes/k3s/nextcloud/nextcloud$
XXXXX@mediaserver:~/Kubernetes/k3s/nextcloud/nextcloud$ k logs -n security-profiles-operator security-profiles-operator-webhook-9f44dd59c-wq7c9
I0615 15:15:06.145328 1 logr.go:261] "msg"="Set logging verbosity to 0"
I0615 15:15:06.145419 1 logr.go:261] "msg"="Profiling support enabled: false"
I0615 15:15:06.145488 1 logr.go:261] setup "msg"="starting component: security-profiles-operator-webhook" "buildDate"="1980-01-01T00:00:00Z" "buildTags"="netgo,osusergo,seccomp,apparmor" "cgoldFlags"="-lseccomp -lelf -lz -lbpf" "compiler"="gc" "dependencies"="github.com/PuerkitoBio/purell v1.1.1 ,github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 ,github.com/ReneKroon/ttlcache/v2 v2.11.0 ,github.com/acobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249 ,github.com/aquasecurity/libbpfgo v0.2.5-libbpf-0.7.0 ,github.com/beorn7/perks v1.0.1 ,github.com/blang/semver v3.5.1+incompatible ,github.com/cert-manager/cert-manager v1.8.0 ,github.com/cespare/xxhash/v2 v2.1.2 ,github.com/containers/common v0.48.1-0.20220510094751-400832f41771 ,github.com/cpuguy83/go-md2man/v2 v2.0.1 ,github.com/crossplane/crossplane-runtime v0.16.0 ,github.com/davecgh/go-spew v1.1.1 ,github.com/emicklei/go-restful v2.9.5+incompatible ,github.com/evanphx/json-patch v4.12.0+incompatible ,github.com/fsnotify/fsnotify v1.5.1 ,github.com/go-logr/logr v1.2.3 ,github.com/go-openapi/jsonpointer v0.19.5 ,github.com/go-openapi/jsonreference v0.19.5 ,github.com/go-openapi/swag v0.19.14 ,github.com/gogo/protobuf v1.3.2 ,github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da ,github.com/golang/protobuf v1.5.2 ,github.com/google/gnostic v0.5.7-v3refs ,github.com/google/go-cmp v0.5.6 ,github.com/google/gofuzz v1.2.0 ,github.com/google/uuid v1.3.0 ,github.com/imdario/mergo v0.3.12 ,github.com/josharian/intern v1.0.0 ,github.com/json-iterator/go v1.1.12 ,github.com/mailru/easyjson v0.7.6 ,github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 ,github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd ,github.com/modern-go/reflect2 v1.0.2 ,github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 ,github.com/nxadm/tail v1.4.8 ,github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 ,github.com/openshift/api 
v0.0.0-20220209124712-b632c5fc10c0 ,github.com/pjbgf/go-apparmor v0.0.7 ,github.com/pkg/errors v0.9.1 ,github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.57.0 ,github.com/prometheus/client_golang v1.12.2 ,github.com/prometheus/client_model v0.2.0 ,github.com/prometheus/common v0.32.1 ,github.com/prometheus/procfs v0.7.3 ,github.com/russross/blackfriday/v2 v2.1.0 ,github.com/seccomp/libseccomp-golang v0.9.2-0.20210429002308-3879420cc921 ,github.com/sirupsen/logrus v1.8.1 ,github.com/spf13/afero v1.8.0 ,github.com/spf13/pflag v1.0.5 ,github.com/urfave/cli/v2 v2.8.1 ,github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 ,golang.org/x/net v0.0.0-20220225172249-27dd8689420f ,golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 ,golang.org/x/sync v0.0.0-20210220032951-036812b2e83c ,golang.org/x/sys v0.0.0-20220422013727-9388b58f7150 ,golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 ,golang.org/x/text v0.3.7 ,golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 ,gomodules.xyz/jsonpatch/v2 v2.2.0 ,google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8 ,google.golang.org/grpc v1.47.0 ,google.golang.org/protobuf v1.28.0 ,gopkg.in/inf.v0 v0.9.1 ,gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 ,gopkg.in/yaml.v2 v2.4.0 ,gopkg.in/yaml.v3 v3.0.1 ,k8s.io/api v0.24.1 ,k8s.io/apiextensions-apiserver v0.24.0 ,k8s.io/apimachinery v0.24.1 ,k8s.io/client-go v0.24.1 ,k8s.io/component-base v0.24.0 ,k8s.io/klog/v2 v2.60.1 ,k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 ,k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 ,sigs.k8s.io/controller-runtime v0.12.1 ,sigs.k8s.io/gateway-api v0.4.1 ,sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 ,sigs.k8s.io/release-utils v0.7.0 ,sigs.k8s.io/structured-merge-diff/v4 v4.2.1 ,sigs.k8s.io/yaml v1.3.0 " "gitCommit"="unknown" "gitCommitDate"="unknown" "gitTreeState"="clean" "goVersion"="go1.18.2" "ldFlags"="-s -w -linkmode external -extldflags \"-static\" -X 
sigs.k8s.io/security-profiles-operator/internal/pkg/version.buildDate=1980-01-01T00:00:00Z -X sigs.k8s.io/security-profiles-operator/internal/pkg/version.version=0.4.3" "libbpf"="v0.7" "libseccomp"="2.5.3" "platform"="linux/amd64" "version"="0.4.3"
I0615 15:15:06.999176 1 logr.go:261] controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"=":8080"
I0615 15:15:06.999476 1 logr.go:261] setup "msg"="registering webhooks"
I0615 15:15:06.999621 1 server.go:145] controller-runtime/webhook "msg"="Registering webhook" "path"="/mutate-v1-pod-binding"
I0615 15:15:06.999701 1 server.go:145] controller-runtime/webhook "msg"="Registering webhook" "path"="/mutate-v1-pod-recording"
I0615 15:15:06.999826 1 logr.go:261] setup "msg"="starting webhook"
I0615 15:15:06.999908 1 server.go:213] controller-runtime/webhook/webhooks "msg"="Starting webhook server"
I0615 15:15:07.000152 1 logr.go:261] controller-runtime/certwatcher "msg"="Updated current TLS certificate"
I0615 15:15:07.000227 1 logr.go:261] controller-runtime/webhook "msg"="Serving webhook server" "host"="" "port"=9443
I0615 15:15:07.000384 1 internal.go:362] "msg"="Starting server" "addr"={"IP":"::","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics"
I0615 15:15:07.000491 1 logr.go:261] controller-runtime/certwatcher "msg"="Starting certificate watcher"
I0615 15:15:07.000643 1 leaderelection.go:248] attempting to acquire leader lease security-profiles-operator/security-profiles-operator-webhook-lock...
I0615 15:17:37.138118 1 leaderelection.go:258] successfully acquired lease security-profiles-operator/security-profiles-operator-webhook-lock
2022/06/15 15:20:03 http: TLS handshake error from 192.168.178.43:50704: EOF
2022/06/15 15:20:03 http: TLS handshake error from 192.168.178.43:50706: EOF
E0615 15:30:35.321472 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:30:39.399996 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:30:43.533792 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:30:47.639475 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nginx-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:30:51.727577 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:30:55.830077 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:30:59.920576 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nginx-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:03.992019 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:08.097295 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:12.185172 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:16.289907 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:20.380655 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:25.490892 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:31:50.051302 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:32:35.112603 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:34:01.130210 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:36:49.069360 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
E0615 15:42:20.865877 1 binding.go:167] binding "msg"="failed to get SeccompProfile types.NamespacedName{Namespace:\"nextcloud\", Name:\"nextcloud-seccomp-profile.json\"}" "error"="wait on retry: timed out waiting for the condition"
XXXXX@mediaserver:~/Kubernetes/k3s/nextcloud/nextcloud$ k logs -n security-profiles-operator security-profiles-operator-7bcfcc4589-llgqs
I0615 15:16:49.042747 1 logr.go:261] "msg"="Set logging verbosity to 0"
I0615 15:16:49.042816 1 logr.go:261] "msg"="Profiling support enabled: false"
I0615 15:16:49.042863 1 logr.go:261] setup "msg"="starting component: security-profiles-operator" "buildDate"="1980-01-01T00:00:00Z" "buildTags"="netgo,osusergo,seccomp,apparmor" "cgoldFlags"="-lseccomp -lelf -lz -lbpf" "compiler"="gc" "dependencies"="github.com/PuerkitoBio/purell v1.1.1 ,github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 ,github.com/ReneKroon/ttlcache/v2 v2.11.0 ,github.com/acobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249 ,github.com/aquasecurity/libbpfgo v0.2.5-libbpf-0.7.0 ,github.com/beorn7/perks v1.0.1 ,github.com/blang/semver v3.5.1+incompatible ,github.com/cert-manager/cert-manager v1.8.0 ,github.com/cespare/xxhash/v2 v2.1.2 ,github.com/containers/common v0.48.1-0.20220510094751-400832f41771 ,github.com/cpuguy83/go-md2man/v2 v2.0.1 ,github.com/crossplane/crossplane-runtime v0.16.0 ,github.com/davecgh/go-spew v1.1.1 ,github.com/emicklei/go-restful v2.9.5+incompatible ,github.com/evanphx/json-patch v4.12.0+incompatible ,github.com/fsnotify/fsnotify v1.5.1 ,github.com/go-logr/logr v1.2.3 ,github.com/go-openapi/jsonpointer v0.19.5 ,github.com/go-openapi/jsonreference v0.19.5 ,github.com/go-openapi/swag v0.19.14 ,github.com/gogo/protobuf v1.3.2 ,github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da ,github.com/golang/protobuf v1.5.2 ,github.com/google/gnostic v0.5.7-v3refs ,github.com/google/go-cmp v0.5.6 ,github.com/google/gofuzz v1.2.0 ,github.com/google/uuid v1.3.0 ,github.com/imdario/mergo v0.3.12 ,github.com/josharian/intern v1.0.0 ,github.com/json-iterator/go v1.1.12 ,github.com/mailru/easyjson v0.7.6 ,github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 ,github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd ,github.com/modern-go/reflect2 v1.0.2 ,github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 ,github.com/nxadm/tail v1.4.8 ,github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 ,github.com/openshift/api 
v0.0.0-20220209124712-b632c5fc10c0 ,github.com/pjbgf/go-apparmor v0.0.7 ,github.com/pkg/errors v0.9.1 ,github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.57.0 ,github.com/prometheus/client_golang v1.12.2 ,github.com/prometheus/client_model v0.2.0 ,github.com/prometheus/common v0.32.1 ,github.com/prometheus/procfs v0.7.3 ,github.com/russross/blackfriday/v2 v2.1.0 ,github.com/seccomp/libseccomp-golang v0.9.2-0.20210429002308-3879420cc921 ,github.com/sirupsen/logrus v1.8.1 ,github.com/spf13/afero v1.8.0 ,github.com/spf13/pflag v1.0.5 ,github.com/urfave/cli/v2 v2.8.1 ,github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 ,golang.org/x/net v0.0.0-20220225172249-27dd8689420f ,golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 ,golang.org/x/sync v0.0.0-20210220032951-036812b2e83c ,golang.org/x/sys v0.0.0-20220422013727-9388b58f7150 ,golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 ,golang.org/x/text v0.3.7 ,golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 ,gomodules.xyz/jsonpatch/v2 v2.2.0 ,google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8 ,google.golang.org/grpc v1.47.0 ,google.golang.org/protobuf v1.28.0 ,gopkg.in/inf.v0 v0.9.1 ,gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 ,gopkg.in/yaml.v2 v2.4.0 ,gopkg.in/yaml.v3 v3.0.1 ,k8s.io/api v0.24.1 ,k8s.io/apiextensions-apiserver v0.24.0 ,k8s.io/apimachinery v0.24.1 ,k8s.io/client-go v0.24.1 ,k8s.io/component-base v0.24.0 ,k8s.io/klog/v2 v2.60.1 ,k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 ,k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 ,sigs.k8s.io/controller-runtime v0.12.1 ,sigs.k8s.io/gateway-api v0.4.1 ,sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 ,sigs.k8s.io/release-utils v0.7.0 ,sigs.k8s.io/structured-merge-diff/v4 v4.2.1 ,sigs.k8s.io/yaml v1.3.0 " "gitCommit"="unknown" "gitCommitDate"="unknown" "gitTreeState"="clean" "goVersion"="go1.18.2" "ldFlags"="-s -w -linkmode external -extldflags \"-static\" -X 
sigs.k8s.io/security-profiles-operator/internal/pkg/version.buildDate=1980-01-01T00:00:00Z -X sigs.k8s.io/security-profiles-operator/internal/pkg/version.version=0.4.3" "libbpf"="v0.7" "libseccomp"="2.5.3" "platform"="linux/amd64" "version"="0.4.3"
I0615 15:16:49.899510 1 logr.go:261] controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"=":8080"
I0615 15:16:49.909303 1 logr.go:261] setup "msg"="starting manager"
I0615 15:16:49.909532 1 internal.go:362] "msg"="Starting server" "addr"={"IP":"::","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics"
I0615 15:16:50.010046 1 leaderelection.go:248] attempting to acquire leader lease security-profiles-operator/security-profiles-operator-lock...
I0615 15:17:07.118393 1 leaderelection.go:258] successfully acquired lease security-profiles-operator/security-profiles-operator-lock
I0615 15:17:07.118664 1 controller.go:185] "msg"="Starting EventSource" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1alpha1.SecurityProfileNodeStatus"
I0615 15:17:07.118684 1 controller.go:193] "msg"="Starting Controller" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod"
I0615 15:17:07.118873 1 controller.go:185] "msg"="Starting EventSource" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1.Pod"
I0615 15:17:07.118885 1 controller.go:193] "msg"="Starting Controller" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod"
I0615 15:17:07.119055 1 controller.go:185] "msg"="Starting EventSource" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1alpha1.SecurityProfilesOperatorDaemon"
I0615 15:17:07.119073 1 controller.go:185] "msg"="Starting EventSource" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1.DaemonSet"
I0615 15:17:07.119083 1 controller.go:193] "msg"="Starting Controller" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod"
I0615 15:17:07.219262 1 controller.go:227] "msg"="Starting workers" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "worker count"=1
I0615 15:17:07.219415 1 controller.go:227] "msg"="Starting workers" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "worker count"=1
I0615 15:17:07.219440 1 controller.go:227] "msg"="Starting workers" "controller"="pods" "controllerGroup"="" "controllerKind"="Pod" "worker count"=1
[...]
I0615 15:55:29.283268 1 ca.go:62] spod-config "msg"="Using cert-manager as certificate provider"
I0615 15:55:47.384211 1 nodestatus.go:117] nodestatus "msg"="Reconciling node status" "namespace"="nextcloud" "nodeStatus"="redis-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.384562 1 nodestatus.go:189] nodestatus "msg"="Setting the status to" "namespace"="nextcloud" "nodeStatus"="redis-seccomp-profile-mediaserver.fritz.box" "Status"="Installed"
I0615 15:55:47.384672 1 nodestatus.go:278] nodestatus "msg"="Updating status" "Profile.Kind"={"Group":"security-profiles-operator.x-k8s.io","Version":"v1beta1","Kind":"SeccompProfile"} "Profile.Name"="redis-seccomp-profile" "Profile.Namespace"="nextcloud" "namespace"="nextcloud" "nodeStatus"="redis-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.403702 1 nodestatus.go:117] nodestatus "msg"="Reconciling node status" "namespace"="security-profiles-operator" "nodeStatus"="log-enricher-trace-mediaserver.fritz.box"
I0615 15:55:47.403997 1 nodestatus.go:189] nodestatus "msg"="Setting the status to" "namespace"="security-profiles-operator" "nodeStatus"="log-enricher-trace-mediaserver.fritz.box" "Status"="Installed"
I0615 15:55:47.404099 1 nodestatus.go:278] nodestatus "msg"="Updating status" "Profile.Kind"={"Group":"security-profiles-operator.x-k8s.io","Version":"v1beta1","Kind":"SeccompProfile"} "Profile.Name"="log-enricher-trace" "Profile.Namespace"="security-profiles-operator" "namespace"="security-profiles-operator" "nodeStatus"="log-enricher-trace-mediaserver.fritz.box"
I0615 15:55:47.416955 1 nodestatus.go:117] nodestatus "msg"="Reconciling node status" "namespace"="security-profiles-operator" "nodeStatus"="nginx-1.19.1-mediaserver.fritz.box"
I0615 15:55:47.417269 1 nodestatus.go:189] nodestatus "msg"="Setting the status to" "namespace"="security-profiles-operator" "nodeStatus"="nginx-1.19.1-mediaserver.fritz.box" "Status"="Installed"
I0615 15:55:47.417366 1 nodestatus.go:278] nodestatus "msg"="Updating status" "Profile.Kind"={"Group":"security-profiles-operator.x-k8s.io","Version":"v1beta1","Kind":"SeccompProfile"} "Profile.Name"="nginx-1.19.1" "Profile.Namespace"="security-profiles-operator" "namespace"="security-profiles-operator" "nodeStatus"="nginx-1.19.1-mediaserver.fritz.box"
I0615 15:55:47.431151 1 nodestatus.go:117] nodestatus "msg"="Reconciling node status" "namespace"="nextcloud" "nodeStatus"="nextcloud-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.431472 1 nodestatus.go:189] nodestatus "msg"="Setting the status to" "namespace"="nextcloud" "nodeStatus"="nextcloud-seccomp-profile-mediaserver.fritz.box" "Status"="Installed"
I0615 15:55:47.431577 1 nodestatus.go:278] nodestatus "msg"="Updating status" "Profile.Kind"={"Group":"security-profiles-operator.x-k8s.io","Version":"v1beta1","Kind":"SeccompProfile"} "Profile.Name"="nextcloud-seccomp-profile" "Profile.Namespace"="nextcloud" "namespace"="nextcloud" "nodeStatus"="nextcloud-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.449213 1 nodestatus.go:117] nodestatus "msg"="Reconciling node status" "namespace"="nextcloud" "nodeStatus"="nginx-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.449525 1 nodestatus.go:189] nodestatus "msg"="Setting the status to" "namespace"="nextcloud" "nodeStatus"="nginx-seccomp-profile-mediaserver.fritz.box" "Status"="Installed"
I0615 15:55:47.449626 1 nodestatus.go:278] nodestatus "msg"="Updating status" "Profile.Kind"={"Group":"security-profiles-operator.x-k8s.io","Version":"v1beta1","Kind":"SeccompProfile"} "Profile.Name"="nginx-seccomp-profile" "Profile.Namespace"="nextcloud" "namespace"="nextcloud" "nodeStatus"="nginx-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.467572 1 nodestatus.go:117] nodestatus "msg"="Reconciling node status" "namespace"="nextcloud" "nodeStatus"="mariadb-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:47.467972 1 nodestatus.go:189] nodestatus "msg"="Setting the status to" "namespace"="nextcloud" "nodeStatus"="mariadb-seccomp-profile-mediaserver.fritz.box" "Status"="Installed"
I0615 15:55:47.468075 1 nodestatus.go:278] nodestatus "msg"="Updating status" "Profile.Kind"={"Group":"security-profiles-operator.x-k8s.io","Version":"v1beta1","Kind":"SeccompProfile"} "Profile.Name"="mariadb-seccomp-profile" "Profile.Namespace"="nextcloud" "namespace"="nextcloud" "nodeStatus"="mariadb-seccomp-profile-mediaserver.fritz.box"
I0615 15:55:57.009263 1 ca.go:62] spod-config "msg"="Using cert-manager as certificate provider"
---
# Source: nextcloud/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    keel.sh/policy: "minor"
  name: nextcloud
  namespace: nextcloud
  labels:
    app: nextcloud
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/nextcloud: localhost/container-nextcloud
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/container-nginx
      labels:
        app: nextcloud
    spec:
      automountServiceAccountToken: false
      containers:
        - name: nextcloud
          image: "nextcloud:23.0.5-fpm-alpine"
          env:
            - name: MYSQL_HOST
              value: "mariadb.nextcloud"
            - name: MYSQL_DATABASE
              value: "nextcloud"
            - name: MYSQL_USER
              value: "nextcloud"
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: mariadb-password
            - name: NEXTCLOUD_ADMIN_USER
              valueFrom:
                secretKeyRef:
                  name: nextcloud-creds
                  key: nextcloud-username
            - name: NEXTCLOUD_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nextcloud-creds
                  key: nextcloud-password
            - name: NEXTCLOUD_TRUSTED_DOMAINS
              value: "nextcloud.mediaserver.fritz.box"
            - name: NEXTCLOUD_DATA_DIR
              value: "/var/www/html/data"
            - name: REDIS_HOST
              value: redis-master
            - name: REDIS_HOST_PORT
              value: "6379"
            - name: REDIS_HOST_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: redis-password
            - name: PHP_UPLOAD_LIMIT
              value: "4096M"
          livenessProbe:
            tcpSocket:
              port: 9000
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: 9000
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          startupProbe:
            tcpSocket:
              port: 9000
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 60
          securityContext:
            runAsUser: 82
            allowPrivilegeEscalation: false
            privileged: false
            runAsNonRoot: true
            capabilities:
              drop:
                - ALL
          resources:
            requests:
              memory: 100Mi
              cpu: 10m
            limits:
              memory: 1Gi
          volumeMounts:
            - name: nextcloud-main
              mountPath: /var/www/
              subPath: root
            - name: nextcloud-main
              mountPath: /var/www/html
              subPath: html
            - name: nextcloud-data
              mountPath: /var/www/html/data
            - name: nextcloud-data-XXXXX
              mountPath: /var/www/html/data/XXXXX
            - name: nextcloud-data-XXXXX
              mountPath: /var/www/html/data/XXXXX
            - name: nextcloud-main
              mountPath: /var/www/html/config
              subPath: config
            - name: nextcloud-main
              mountPath: /var/www/html/custom_apps
              subPath: custom_apps
            - name: nextcloud-main
              mountPath: /var/www/tmp
              subPath: tmp
            - name: nextcloud-main
              mountPath: /var/www/html/themes
              subPath: themes
        - name: nginx
          image: nginx:1.22.0-alpine
          ports:
            - name: http
              containerPort: 80
          resources:
            requests:
              memory: 100Mi
              cpu: 50m
            limits:
              memory: 500Mi
          livenessProbe:
            httpGet:
              path: /status.php
              port: http
              httpHeaders:
                - name: Host
                  value: "localhost"
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /status.php
              port: http
              httpHeaders:
                - name: Host
                  value: "localhost"
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          startupProbe:
            httpGet:
              path: /status.php
              port: http
              httpHeaders:
                - name: Host
                  value: "localhost"
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 60
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/default.conf
              name: nginx-default-conf
              readOnly: true
              subPath: default.conf
            - mountPath: /etc/nginx/conf.d/nextcloud.conf
              name: nginx-nextcloud-conf
              readOnly: true
              subPath: nextcloud.conf
            - mountPath: /etc/nginx/nginx.conf
              name: nginx-server-conf
              readOnly: true
              subPath: nginx.conf
            - name: nextcloud-main
              mountPath: /var/www/
              subPath: root
            - name: nextcloud-main
              mountPath: /var/www/html
              subPath: html
            - name: nextcloud-data
              mountPath: /var/www/html/data
            - name: nextcloud-data-XXXXX
              mountPath: /var/www/html/data/XXXXX
            - name: nextcloud-data-XXXXX
              mountPath: /var/www/html/data/XXXXX
            - name: nextcloud-main
              mountPath: /var/www/html/config
              subPath: config
            - name: nextcloud-main
              mountPath: /var/www/html/custom_apps
              subPath: custom_apps
            - name: nextcloud-main
              mountPath: /var/www/tmp
              subPath: tmp
            - name: nextcloud-main
              mountPath: /var/www/html/themes
              subPath: themes
      volumes:
        - name: nextcloud-main
          persistentVolumeClaim:
            claimName: nextcloud-system
        - name: nextcloud-data
          persistentVolumeClaim:
            claimName: nextcloud-data
        - name: nextcloud-data-XXXXX
          persistentVolumeClaim:
            claimName: nextcloud-data-XXXXX
        - name: nextcloud-data-XXXXX
          persistentVolumeClaim:
            claimName: nextcloud-data-XXXXX
        - name: nginx-default-conf
          configMap:
            name: nextcloud-nginx-config
            items:
              - key: default.conf
                path: default.conf
                mode: 0444
        - name: nginx-nextcloud-conf
          configMap:
            name: nextcloud-nginx-config
            items:
              - key: nextcloud.conf
                path: nextcloud.conf
                mode: 0444
        - name: nginx-server-conf
          configMap:
            name: nextcloud-nginx-config
            items:
              - key: nginx.conf
                path: nginx.conf
                mode: 0444
      # Will mount configuration files as www-data (id: 82) for nextcloud
      securityContext:
        fsGroup: 82
      serviceAccountName: nextcloud-serviceaccount
On a side note, the operator logs are flooded with these "Reconciling node status", "Setting the status to", and "Updating status" messages, even though nothing has changed.
Not to turn this bug report into a Q&A, but I just can't seem to get these seccomp profiles to work with the operator.
When I specify the seccomp profile directly in the deployment YAML, it does work:
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    add:
      - NET_BIND_SERVICE
  seccompProfile:
    type: Localhost
    localhostProfile: operator/nextcloud/nginx-seccomp-profile.json
So there is something wrong with the ProfileBinding.
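For reference, the kind of binding in question would look roughly like this. This is only a sketch based on the upstream ProfileBinding examples; the metadata name here is made up, and the apiVersion may differ between operator releases:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileBinding
metadata:
  # hypothetical name for illustration
  name: nginx-seccomp-binding
  namespace: nextcloud
spec:
  profileRef:
    kind: SeccompProfile
    name: nginx-seccomp-profile
  # the binding matches pods by exact container image
  image: nginx:1.22.0-alpine
```

Note that the binding matches on the exact image string, so a tag mismatch between the binding and the deployment would silently leave the profile unapplied.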
I split the issue about the verbose logging in nodestatus into its own ticket.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
We got the same bug report in OCP.
/assign
What happened:
The operator immediately crashed after installation with an index out of range error.
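The panic can be reproduced in isolation: `strings.Split` returns a single-element slice when the separator is absent, so indexing element `[1]` unconditionally blows up for any profile reference that contains no slash (e.g. `RuntimeDefault`). This is a minimal sketch, not the operator's actual code or fix; the helper name and the length check are mine:

```go
package main

import (
	"fmt"
	"strings"
)

// getProfileNamespace mimics parsing a localhost seccomp profile path
// of the form "operator/<namespace>/<name>.json". The crashing code
// indexed profileElements[1] without checking the element count, which
// panics with "index out of range" for slash-less values like
// "RuntimeDefault".
func getProfileNamespace(profile string) (string, bool) {
	elements := strings.Split(profile, "/")
	if len(elements) < 2 {
		// defensive check the crashing code was missing
		return "", false
	}
	return elements[1], true
}

func main() {
	for _, p := range []string{
		"operator/nextcloud/nginx-seccomp-profile.json",
		"RuntimeDefault",
	} {
		ns, ok := getProfileNamespace(p)
		fmt.Println(p, "->", ns, ok)
	}
}
```

Without the length check, the second input panics exactly as reported.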
What you expected to happen:
I expect the operator to function after installation, not immediately crash.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Nextcloud has been rolled out with Helm (with MariaDB and Redis), with fairly basic settings. I initially thought that the zero in the name caused the issue, but Redis is also a StatefulSet and the operator doesn't complain about the zero in its name.
Running on Ubuntu 22.04 with k3s.
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)