Hey @ocni-dtu, thanks for reporting the issue.
Could you please check if you created your `aws-secret` in the `kube-system` namespace with the correct keys? If you're using the default Helm chart values, it should be something like this:
```shell
$ kubectl create secret generic aws-secret \
    --namespace kube-system \
    --from-literal "key_id=${AWS_ACCESS_KEY_ID}" \
    --from-literal "access_key=${AWS_SECRET_ACCESS_KEY}"
```
If the issue still persists, could you please try again with version v1.8.0? We changed something related to long-term AWS credentials in v1.8.1, so it might be connected to that.
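For testing against a specific release, one option is to pin the driver image via the chart. A sketch, assuming the standard Helm install of the kubernetes-sigs aws-ebs-csi-driver chart (the release name and repo alias here are only examples; check your chart version's `values.yaml` for the exact image keys):

```shell
$ helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
$ helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
    --namespace kube-system \
    --set image.tag=v1.8.0
```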
Yes, the `aws-secret` was created in `kube-system`, though not with the keys `key_id` and `access_key`; those names were overridden in the Helm chart values:
```yaml
awsAccessSecret:
  keyId: accesskeyid
  accessKey: accessskeysecret
```
Which, I can now see, contains a typo... 🤦🏻
I can also report that it works in both v1.8.0 and v1.8.1.
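For anyone landing here with the same symptom: the values under `awsAccessSecret` must match the data keys inside the secret character for character. A sketch of the corrected pairing, assuming the secret itself was created with the keys `accesskeyid` and `accesskeysecret`:

```yaml
awsAccessSecret:
  keyId: accesskeyid          # data key in aws-secret holding the AWS access key ID
  accessKey: accesskeysecret  # was "accessskeysecret" (extra "s") in the failing values
```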
Thanks for the report @ocni-dtu! Happy to hear that your problem is solved.
/kind bug
**What happened?**
Trying to get the CSI driver to work on an EC2 instance with K3s installed. Getting the following error:

**What you expected to happen?**
The PVC to mount.
**How to reproduce it (as minimally and precisely as possible)?**
Installed the CSI driver with Helm. The `aws-secret` is created in advance. The PV/PVC setup looks like this:
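The manifest itself wasn't captured in this thread; purely as an illustration, a static-provisioning PV/PVC pair for the EBS CSI driver might look roughly like this (all names, sizes, and the volume ID below are placeholders, not the reporter's actual values):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-static-pv                     # placeholder name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""                    # empty to disable dynamic provisioning
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # placeholder EBS volume ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-static-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: ebs-static-pv               # bind directly to the PV above
  resources:
    requests:
      storage: 5Gi
```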
**Anything else we need to know?:**
**Environment**
- Kubernetes version (use `kubectl version`): v1.29.4+k3s1