Not sure why, but I applied the following role binding and it worked. Seems weird.
Anyway, would it make sense to have a section in the documentation explaining a little about the right way to set up PSP with minimum privileges for Strimzi?
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:kafka
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - kafka # the psp we are giving access to
  verbs:
  - use
---
# This binds the psp:kafka ClusterRole to the Strimzi service accounts
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:kafka
subjects:
- kind: ServiceAccount
  name: cdc-entity-operator
  namespace: kafka
- kind: ServiceAccount
  name: cdc-kafka
  namespace: kafka
- kind: ServiceAccount
  name: cdc-zookeeper
  namespace: kafka
roleRef:
  kind: ClusterRole
  name: psp:kafka
  apiGroup: rbac.authorization.k8s.io
Interesting, it doesn't seem to differ from the previous one apart from the missing default service account. Shouldn't the PSP inherit through the service accounts? I.e. wouldn't it be enough to set it for the operator service account before it creates the new service accounts?
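For concreteness, binding only the operator's service account would be a sketch along these lines - the strimzi-cluster-operator name is an assumption based on a default install, not something confirmed in this thread:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:kafka-operator
subjects:
- kind: ServiceAccount
  name: strimzi-cluster-operator # assumed default operator service account name
  namespace: kafka
roleRef:
  kind: ClusterRole
  name: psp:kafka
  apiGroup: rbac.authorization.k8s.io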
Haven't tried it. The reason was that the operator worked with the default PSP applied on my cluster (restricted).
However, the deployment couldn't start because it runs as GID 0. This prevented any pod in the StatefulSet from being created, as the restricted policy forbids it. This seems to be a WIP according to some issue I read here. Are there any plans to do away with this config?
That is not really a work in progress - I do not think there is any active work on that. You can configure the security context of the pods in the Kafka CR to match your policies. In some Kubernetes distributions - for example on OpenShift - the context is injected automatically. But on pure Kubernetes you might need to do it yourself, depending on the policy you have.
Something feels broken here. The cluster operator deployed using OLM is running as the following user:
Now this is similar to the UID/GID of the Kafka and Zookeeper StatefulSets. I can also see that the container is actually using the z-restricted PSP.
However, the containers fail to run in the kafka namespace with this PSP. I followed your recommendation and the EXCELLENT documentation to specify a securityContext, and the stack now runs successfully with a restricted PSP.
Great work so far on Strimzi you guys!
Ref:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: cdc
spec:
  kafka:
    version: 2.5.0
    template:
      pod:
        securityContext:
          runAsUser: 1001
          runAsGroup: 1001
          fsGroup: 1001
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 50Gi
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 50Gi
        deleteClaim: false
  zookeeper:
    template:
      pod:
        securityContext:
          runAsUser: 1001
          runAsGroup: 1001
          fsGroup: 1001
    replicas: 3
    storage:
      type: persistent-claim
      size: 50Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
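With the securityContext above in place, one way to double-check which PSP a pod was actually admitted under is the kubernetes.io/psp annotation that the PSP admission controller adds to pods; the pod name below is assumed from the cdc cluster naming, so adjust as needed:
kubectl get pod cdc-kafka-0 -n kafka -o yaml | grep kubernetes.io/psp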
Hi Team,
I am trying to deploy a Kafka cluster using the strimzi-kafka-operator (installed by OLM) on my cluster, which has PSP enabled (v1.19.1). I have managed to get the operator to deploy a cluster with the following PSP:
But I cannot seem to figure out the appropriate role binding. Currently, I have to allow all authenticated users to use the PSP for the operator to deploy the cluster. This RoleBinding seems to be too permissive. I ran kubectl get sa -n kafka and it returned the following three service accounts:
I tried granting permissions to these service accounts in my RoleBinding, but it did not work and the cluster was not deployed.
I am using the YAML in the examples to deploy a Kafka cluster. What is it that I am missing?
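For reference, "all authenticated users" here means a binding roughly like the following sketch (the metadata name is made up for illustration), which is what makes it feel too permissive - every authenticated subject in the cluster gets to use the PSP:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:kafka-authenticated # hypothetical name for the overly broad binding
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: psp:kafka
  apiGroup: rbac.authorization.k8s.io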