Open klagroix opened 1 year ago
How are you adding the environment variables? Are you adding them to the pod directly?

Did you try adding these environment variables to the `tabletPools` config as `extraEnv`? You can specify extra environment variables there, and they are added to all the mysqld and vttablet pods.
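For reference, a minimal, abbreviated sketch of where `extraEnv` sits in a VitessCluster manifest; the keyspace layout and values below are placeholders, and other required fields are omitted:

```yaml
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  keyspaces:
    - name: commerce                  # placeholder keyspace
      partitionings:
        - equal:
            parts: 1
            shardTemplate:
              # databaseInitScriptSecret, mysqld/vttablet resources, volumes, etc. omitted
              tabletPools:
                - cell: zone1
                  type: replica
                  replicas: 2
                  extraEnv:
                    # Plain env var entries added to the mysqld and vttablet containers
                    - name: AWS_REGION
                      value: us-east-1   # placeholder value
```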
Hello, when using AWS EKS with IAM Roles for Service Accounts (IRSA), service accounts are annotated with an IAM role ARN, and EKS automatically injects `AWS_*` environment variables into pods that use the service account.

As these `AWS_*` values are dynamic, I cannot add them as `extraEnv`. I did try adding fake/placeholder values for these environment variables in `extraEnv`, but vitess-operator still saw a difference and attempted to re-create the pods.

It would be nice if we could tell vitess-operator to ignore specific environment variables in the diff.
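For context, with IRSA the only thing we declare ourselves is the ServiceAccount annotation; EKS's pod identity webhook then injects variables such as `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` at admission time, which is why they cannot be pinned in the manifest. Roughly (names and the role ARN below are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vitess-tablet          # placeholder name
  namespace: example-vitess    # placeholder namespace
  annotations:
    # EKS injects AWS_* credential env vars and a projected token volume
    # into pods that use this ServiceAccount; the role ARN is a placeholder.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/vitess-backup-role
```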
We're trying to use vitess-operator with an S3 backup spec defined in our VitessCluster manifest. Example as follows:
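A minimal sketch with placeholder values (bucket, region, and key prefix below are not our real settings), assuming the standard `spec.backup.locations` S3 fields:

```yaml
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  backup:
    engine: xtrabackup
    locations:
      - s3:
          # Placeholder values. With IRSA there is no static authSecret;
          # the pods rely on the AWS_* credentials injected by EKS.
          region: us-east-1
          bucket: example-vitess-backups
          keyPrefix: vitess
  # ... cells, keyspaces, etc. omitted ...
```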
From the logs, it appears that `vttablet` attempts to read the backup S3 bucket to see if it needs to restore from the latest backup. As such, we need to provide AWS credentials that allow the pod to read the S3 bucket.

Typically, we use IAM Roles for Service Accounts (IRSA), which allows annotating a Service Account with an IAM role ARN: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html. `AWS_*` environment variables are then added to the pod so it can authenticate and use the IAM role.

The issue here is that vitess-operator sees these `AWS_*` environment variables on the pod and attempts to `$patch: delete` these variables.
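For illustration, a Kubernetes strategic merge patch that removes an env entry by its `name` merge key takes roughly this form (the container and variable names below are placeholders, not copied from the operator's actual patch):

```yaml
spec:
  containers:
    - name: vttablet
      env:
        # The $patch: delete directive tells the strategic merge patch to
        # remove the matching list entry instead of keeping or merging it.
        - name: AWS_ROLE_ARN
          $patch: delete
```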
Example log taken from vitess-operator:
Is there any way to force vitess-operator to ignore the `AWS_*` environment variable discrepancies?

For context, these tests were performed on an AWS-hosted EKS cluster using vitess-operator `v2.9.0-rc1`.