danielpacak opened 3 years ago
I want to fix this error.
Applicable to both #579 and this issue.
I've just installed:

```console
$ starboard version
Starboard Version: {Version:0.14.1 Commit:5672fd4a4d608d9b094802098f3e950ec396ff51 Date:2022-01-25T17:38:43Z}
```
I believe the failure stems from a mismatch between the SCC the starboard service account (named `starboard`) is permitted to use and what it actually needs for the `securityContext` and host requirements in the jobs:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: scan-kubehunterreports-7594df9b45
  namespace: starboard
spec:
  template:
    spec:
      containers:
      - securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_RAW
            drop:
            - all
          privileged: false
          readOnlyRootFilesystem: false
      hostPID: true
      securityContext:
        runAsGroup: 0
        runAsUser: 0
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: starboard
      serviceAccountName: starboard
---
apiVersion: batch/v1
kind: Job
metadata:
  name: scan-cisbenchmark-586b9df6c
  namespace: starboard
spec:
  template:
    spec:
      containers:
      - securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          privileged: false
          readOnlyRootFilesystem: true
      hostPID: true
      securityContext:
        runAsGroup: 0
        runAsUser: 0
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: starboard
      serviceAccountName: starboard
```
Key requirements in both jobs which aren't compatible with the default SCC, named `restricted`, are:

```yaml
hostPID: true
securityContext:
  runAsGroup: 0
  runAsUser: 0
  seccompProfile:
    type: RuntimeDefault
```
Along with this setting in the kube-hunter job:

```yaml
capabilities:
  add:
  - NET_RAW
```

(Default capabilities in CRI-O are listed here, under `default_capabilities`; they are CHOWN, DAC_OVERRIDE, FSETID, FOWNER, SETGID, SETUID, SETPCAP, NET_BIND_SERVICE and KILL.)

The `restricted` SCC only permits the default set, with a requirement to drop some of them. This means `NET_RAW` and the `restricted` SCC aren't compatible, and `restricted` is likewise incompatible with the hostPID access, the specification of a seccomp profile, and the use of a run-as user and/or group.
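The capability check behind this incompatibility can be sketched as follows. This is an illustrative simplification, not OpenShift's actual admission code; the `allowedCapabilities` semantics (including the `'*'` wildcard) mirror the SCC fields shown in this issue:

```python
# Illustrative sketch of how an SCC gates capabilities a container adds:
# an added capability is only admitted if it appears in the SCC's
# allowedCapabilities list, or the list contains the '*' wildcard.

def capabilities_permitted(requested_add, allowed_capabilities):
    """Return True if every capability in requested_add is allowed."""
    if "*" in allowed_capabilities:
        return True
    return set(requested_add) <= set(allowed_capabilities)

# The restricted SCC allows no extra capabilities, so adding NET_RAW fails...
print(capabilities_permitted(["NET_RAW"], []))            # False
# ...while a bespoke SCC listing NET_RAW in allowedCapabilities admits it.
print(capabilities_permitted(["NET_RAW"], ["NET_RAW"]))   # True
```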
There is an SCC called `hostaccess` which allows access to the host PID namespace (and others):

```console
$ oc get scc hostaccess -oyaml | grep allowHost
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
```

However, permitted user and group IDs for that SCC must come from within the namespace's allocated range, so it is incompatible with `runAsUser: 0` and `runAsGroup: 0`; it is also incompatible with the seccomp specification.
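The namespace-range restriction can be sketched like this. The `openshift.io/sa.scc.uid-range` annotation format (`<start>/<size>`) is real OpenShift behaviour, but the parsing below is an illustrative reduction and the example range is hypothetical:

```python
# Illustrative sketch: OpenShift records each namespace's allocated UID block
# in the openshift.io/sa.scc.uid-range annotation as "<start>/<size>".
# SCC user strategies tied to the namespace range only admit UIDs from it.

def uid_in_namespace_range(uid, uid_range_annotation):
    start, size = (int(part) for part in uid_range_annotation.split("/"))
    return start <= uid < start + size

# A typical allocated range; UID 0 (root) falls outside it, which is why
# runAsUser: 0 is rejected under range-bound SCCs such as hostaccess.
print(uid_in_namespace_range(0, "1000090000/10000"))           # False
print(uid_in_namespace_range(1000090005, "1000090000/10000"))  # True
```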
This means that of the opinionated SCCs provided by OpenShift, `restricted`, any of the `host*` SCCs, `nonroot` and `anyuid` are all incompatible with the securityContexts and host access required.

That leaves the most open SCC, `privileged`, which comes with the following warning from Red Hat: "This is the most relaxed SCC and should be used only for cluster administration. Grant with caution." It really should only be used as a last resort. Instead, a bespoke SCC permitting the required securityContexts and host access should be created, and the serviceAccount `starboard` should be granted permission to use it via a ClusterRoleBinding (or via `oc adm policy add-scc-to-user -z starboard`). Ideally, separate serviceAccounts should be created for kube-bench, kube-hunter and the other jobs, to ensure granularity of permission granting, i.e. only grant each job/serviceAccount the minimal privilege it requires.
The SCC should be something like:

```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: starboard
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: true
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities:
- NET_RAW
defaultAddCapabilities: null
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: RunAsAny
readOnlyRootFilesystem: false
supplementalGroups:
  type: RunAsAny
seccompProfiles:
- '*'
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
```
Then a ClusterRole would be needed, for use in the ClusterRoleBinding, something like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scc-starboard
rules:
- apiGroups:
  - security.openshift.io
  resourceNames:
  - starboard
  resources:
  - securitycontextconstraints
  verbs:
  - use
```
And then bound with:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scc-starboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: scc-starboard
subjects:
- kind: ServiceAccount
  name: starboard
  namespace: starboard
```
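The effect of the role and binding above can be sketched with a simplified RBAC rule check; this is an illustrative reduction of what the Kubernetes authorizer does, not its actual implementation:

```python
# Illustrative sketch of RBAC evaluation for the SCC "use" verb.
# A rule matches when the request's API group, resource and verb all appear
# in the rule, and, if the rule names specific resources via resourceNames,
# the requested resource name is among them.

def rule_allows(rule, group, resource, name, verb):
    return (group in rule["apiGroups"]
            and resource in rule["resources"]
            and verb in rule["verbs"]
            and (not rule.get("resourceNames") or name in rule["resourceNames"]))

# The scc-starboard ClusterRole above, expressed as a dict.
scc_starboard_role = {
    "apiGroups": ["security.openshift.io"],
    "resources": ["securitycontextconstraints"],
    "resourceNames": ["starboard"],
    "verbs": ["use"],
}

# Once the ClusterRoleBinding ties this role to the starboard ServiceAccount,
# "can this SA use the starboard SCC?" succeeds...
print(rule_allows(scc_starboard_role, "security.openshift.io",
                  "securitycontextconstraints", "starboard", "use"))   # True
# ...but using any other SCC (e.g. privileged) is still denied.
print(rule_allows(scc_starboard_role, "security.openshift.io",
                  "securitycontextconstraints", "privileged", "use"))  # False
```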
What steps did you take and what happened:

What did you expect to happen:
CISKubeBenchReport instances created for each cluster node.

Anything else you would like to add:
N/A

Environment:
- Starboard version (use `starboard version`): v0.10.2
- Kubernetes version (use `kubectl version`): any