Issue
With version 1.8.1-preview of this code repo, I am running Apigee hybrid on AWS EKS, and at the cluster level we have disabled `automountServiceAccountToken` for the default ServiceAccount, as per security best practices.
The controller manager Deployment and its RBAC expect the default service account, and both containers, kube-rbac-proxy and manager, fail to start because they cannot mount the default service account token.
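For context, this is a minimal sketch of what that hardening looks like on a namespace's default ServiceAccount (the apigee namespace is used here purely as an example, not taken from the repo):

```yaml
# Minimal sketch (illustrative): token auto-mount is disabled on the
# namespace's default ServiceAccount, so any pod that relies on it cannot
# mount a service account token unless it opts in explicitly.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: apigee   # example namespace; we apply this across the cluster
automountServiceAccountToken: false
```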
Recommendation
The recommendation is to add a dedicated ServiceAccount for the controller manager, the same way the ingress-manager Deployment already does, and to modify the controller manager RBAC to refer to that service account instead of the default SA.
Fix:
To work around this, I created a separate ServiceAccount named apigee-controller-manager, referenced it in the controller Deployment via the `serviceAccountName: apigee-controller-manager` attribute, and updated its RBAC accordingly; refer to the attached yamls and the sketch below. This fixed the kube-rbac-proxy container, which now starts, but the manager container is still not working. Below is the error I am getting from the manager container logs:
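For reference, here is a minimal sketch of that change. The ServiceAccount name matches what I used; the namespace, ClusterRole, and ClusterRoleBinding names are illustrative assumptions, and the attached yamls remain the authoritative version:

```yaml
# Sketch of the workaround: a dedicated ServiceAccount plus the existing RBAC
# rebound to it instead of the default SA. Names marked as assumed below are
# illustrative only.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apigee-controller-manager
  namespace: apigee                        # assumed install namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apigee-controller-manager          # illustrative binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apigee-controller-manager          # existing controller manager role (name assumed)
subjects:
- kind: ServiceAccount
  name: apigee-controller-manager
  namespace: apigee
```

In the controller Deployment itself, the only change is adding `serviceAccountName: apigee-controller-manager` under `spec.template.spec`, so the pod stops relying on the default ServiceAccount.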
`kubectl logs apigee-controller-manager-6995d9876d-txjj6 -c manager -n apigee`

`{"level":"error","ts":1667549367.5793326,"caller":"provisioning/main.go:350","msg":"unable to create webhook v1alpha2 ApigeeEnvironment: failed to create apigee API client: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.","stacktrace":"main.main\n\t/go/src/edge-internal/k8s-controllers/provisioning/main.go:350\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}`
This is a blocking issue for us and we cannot proceed with the installation.
I believe the apigee-operator image is still expecting the default service account. Could you please fix this with high priority?
Attached are the modified RBAC, controller Deployment, and ServiceAccount yamls (bug.zip), which show the custom ServiceAccount.