aivanov-citc closed this issue 11 months ago.
It is up to you where the provisioner is deployed; the driver itself has no preference.
We ran several checks, deploying test pods to different namespaces, and confirmed that the provisioner pod always runs in the "default" namespace. How can we control this?
I have trouble understanding what you are aiming for; maybe you can create a PR which demonstrates the problem.
I'm trying to deploy a test pod:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: csi-lvm-system
spec:
  containers:
  - name: hello-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /mnt/store
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: storage-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-claim
  namespace: csi-lvm-system
spec:
  storageClassName: csi-driver-lvm-linear
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Whichever namespace I deploy this pod to ("default", "test", "csi-lvm-system"), the pod responsible for creating the LV (create-pvc-xxxxxxx) is always deployed in the "default" namespace. Since "create-pvc-xxxxxxx" is privileged, it would be logical to create it in the namespace of the "csi-lvm-system" driver itself and to apply the required annotations only to that namespace, not to the default namespace:
$ kubectl get pods -A
NAMESPACE        NAME                                              READY   STATUS              RESTARTS   AGE
csi-lvm-system   busybox                                           0/1     Pending             0          2s
default          create-pvc-dd2780e5-8b79-4620-b9c9-c5420a76abf0   0/1     ContainerCreating   0          1s
Maybe we can just create a pull request for a flag (--namespace) which passes the namespace on to the provisioner pod metadata.
We can use environment field refs to inject the namespace in our manifests and helm-charts, like:
- name: CSI_DRIVER_LVM_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
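For illustration, a minimal sketch of how the plugin container could wire that value into the existing --namespace flag (container name and image are placeholders, not the real chart values):

containers:
- name: csi-driver-lvm-plugin                  # placeholder name for illustration
  image: example/csi-driver-lvm                # placeholder image
  env:
  - name: CSI_DRIVER_LVM_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace          # resolves to the namespace this pod runs in
  args:
  - --namespace=$(CSI_DRIVER_LVM_NAMESPACE)    # Kubernetes expands $(VAR) from the container's own env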
This resolves the problem, right?
I think yes, it does. Thank you.
Hey @aivanov-citc,
I just looked at the problem and found out a few things.
There is already a --namespace flag in the plugin. The flag is used to deploy the provisioner pod in the given namespace: https://github.com/metal-stack/csi-driver-lvm/blob/v0.5.2/pkg/lvm/lvm.go#L395.
The flag is set automatically through the helm-chart: https://github.com/metal-stack/helm-charts/blob/v0.3.32/charts/csi-driver-lvm/templates/plugin.yaml#L176. Did you deploy this project through our helm repo? If not, maybe you missed setting the existing --namespace flag for the lvm plugin?
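For anyone deploying with plain manifests instead of the helm-chart, the flag has to be set on the plugin container manually; a rough sketch (container name assumed for illustration):

containers:
- name: csi-driver-lvm-plugin     # container name assumed for illustration
  args:
  - --namespace=csi-driver-lvm    # provisioner pods will be created in this namespace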
In #93, I created a branch that activates Pod Security on the Kind cluster. For the integration tests, I deployed the driver to a dedicated csi-driver-lvm namespace. During the integration tests, you can see that the provisioner pods are correctly deployed to the plugin's namespace and not to the default namespace:
❯ k get po -A
NAMESPACE        NAME                                              READY   STATUS              RESTARTS   AGE
csi-driver-lvm   create-pvc-7a7013ea-1b39-464d-baf7-50dad87a356b   0/1     ContainerCreating   0          1s
csi-driver-lvm   csi-driver-lvm-controller-0                       3/3     Running             0          9s
csi-driver-lvm   csi-driver-lvm-plugin-b4265                       3/3     Running             0          9s
default          volume-test                                       0/1     Pending             0          1s
default          volume-test-inline-xfs                            0/1     Terminating         0          49m
Hey @Gerrit91. So it is; I'm sorry, I did not see it. I'll close the issue.
Talos clusters use Pod Security Standards by default and do not allow the creation of privileged pods. To create privileged pods in a namespace, you need to add special annotations to the namespace.
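For example, with Pod Security Admission this exception is expressed through namespace labels, roughly like:

apiVersion: v1
kind: Namespace
metadata:
  name: csi-driver-lvm
  labels:
    pod-security.kubernetes.io/enforce: privileged   # allow privileged pods in this namespace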
Currently, the provisioner pod is created in the default namespace. Since the provisioner pod is privileged, please create it in the csi-driver-lvm namespace instead, so that such annotations do not have to be added to the default namespace.