The AWS provider for the Secrets Store CSI Driver allows you to fetch secrets from AWS Secrets Manager and AWS Systems Manager Parameter Store, and mount them into Kubernetes pods.
Apache License 2.0
Fix support for non-default kubelet root directory #330
This fixes the change from #322. It was impractical for me to validate that change at the time due to the pain of migrating, but I have now validated that this works in an EKS cluster in our account.
It turns out the container-local path must also be updated to match the path on the host. Intuitively, changing the container-local path seemed like it would break assumptions in the provider; counter-intuitively, it is the mismatch between the container and host path strings that breaks the CSI driver stack.
The breakage presents as the upstream CSI driver (not the AWS provider) being unable to mount the secrets volumes in pods. See below for a sanitized example error from the CSI driver workload.
```
secrets-store I0320 01:22:00.029130 1 nodeserver.go:359] "Using gRPC client" provider="aws" pod="my-workload-66878c4c8c-98k8f"
secrets-store E0320 01:22:00.474832 1 nodeserver.go:242] "failed to mount secrets store object content" err="rpc error: code = Unknown desc = open /custom/kubelet-dir/pods/74d737a3-e6cd-4df2-af2c-2f51143e25ef/volumes/kubernetes.io~csi/secrets-store/mount/arn:aws:secretsmanager:us-east-1:123456789012:secret:my-cool-secret-AaBbCc1234567890: no such file or directory" pod="my-namespace/my-workload-66878c4c8c-98k8f"
secrets-store I0320 01:22:00.474875 1 nodeserver.go:88] "unmounting target path as node publish volume failed" targetPath="/custom/kubelet-dir/pods/74d737a3-e6cd-4df2-af2c-2f51143e25ef/volumes/kubernetes.io~csi/secrets-store/mount" pod="my-namespace/my-workload-66878c4c8c-98k8f"
```
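To make the container/host path matching concrete, here is a minimal sketch of the relevant part of a provider DaemonSet spec for a non-default kubelet root directory. The field names follow the usual Secrets Store CSI Driver DaemonSet layout, and `/custom/kubelet-dir` is taken from the sanitized log above; this is an illustration, not the exact manifest changed in this PR.

```yaml
# Illustrative DaemonSet excerpt: with a non-default kubelet root directory,
# the container-local mountPath must be the SAME string as the hostPath on
# the node, otherwise the CSI driver resolves paths that do not exist inside
# the provider container.
spec:
  containers:
    - name: provider-aws-installer
      volumeMounts:
        - name: mountpoint-dir
          mountPath: /custom/kubelet-dir/pods   # must match hostPath.path below
          mountPropagation: HostToContainer
  volumes:
    - name: mountpoint-dir
      hostPath:
        path: /custom/kubelet-dir/pods          # custom kubelet root dir on the node
        type: DirectoryOrCreate
```

If the `mountPath` were left at the default (e.g. `/var/lib/kubelet/pods`) while `hostPath.path` pointed at the custom directory, the paths exchanged over the CSI gRPC interface would not resolve inside the container, producing the "no such file or directory" error shown above.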
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.