This workflow usually works fine when creating Kubernetes resources in Terraform with the Kubernetes provider, just as with any other cluster. If the Terraform manifests include resources of type helm_release, however, I appear to be running into a permissions issue with the github-runner service account.
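For context, the failing release is of roughly this shape (a sketch only: the chart coordinates are my assumption based on the "robusta-forwarder" resource named in the error; the actual values differ):

```hcl
# Sketch of the failing resource type; chart coordinates are assumptions,
# inferred from the "robusta-forwarder" names in the error message.
resource "helm_release" "robusta" {
  name             = "robusta"
  repository       = "https://robusta-charts.storage.googleapis.com" # assumption
  chart            = "robusta"
  namespace        = "robusta"
  create_namespace = true
}
```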
This is the error message:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource ClusterRole "robusta-forwarder-cluster-role" in namespace "": clusterroles.rbac.authorization.k8s.io "robusta-forwarder-cluster-role" is forbidden: User "system:serviceaccount:actions-runner-system:actions-runner" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
I don't understand most of it, because:

- since the vcluster has just been created, it is unlikely that anything already exists in it that would conflict with the resources to be created
- why would I receive an error from the vcluster API about a service account that does not exist in the vcluster (it only exists at the actual Kubernetes level)?
- in any case, I tried adding the ClusterRole and the required permissions at both the EKS and the vcluster level, but the error stays the same
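What I tried looks roughly like this (a sketch; the resource names are mine, while the service account identity is taken verbatim from the error message):

```hcl
# Sketch of the grant I tried at the EKS (host) level; resource names are
# illustrative, the subject matches the service account from the error.
resource "kubernetes_cluster_role" "runner_rbac_read" {
  metadata {
    name = "actions-runner-rbac-read"
  }
  rule {
    api_groups = ["rbac.authorization.k8s.io"]
    resources  = ["clusterroles", "clusterrolebindings"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "runner_rbac_read" {
  metadata {
    name = "actions-runner-rbac-read"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.runner_rbac_read.metadata[0].name
  }
  subject {
    kind      = "ServiceAccount"
    name      = "actions-runner"
    namespace = "actions-runner-system"
  }
}
```

I applied the equivalent inside the vcluster as well, with the same result.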
Applying Kubernetes resources like ClusterRole, ClusterRoleBinding, and ServiceAccount with Terraform to a vcluster created with the loft CLI is not a problem at all and works as expected.
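For completeness, both providers are pointed at the vcluster kubeconfig roughly like this (a sketch; the kubeconfig path is illustrative, and the nested kubernetes block is the helm provider v2 syntax):

```hcl
# Sketch of the provider wiring; the kubeconfig path is illustrative.
provider "kubernetes" {
  config_path = "~/.kube/vcluster.yaml"
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/vcluster.yaml"
  }
}
```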
I'd appreciate any hints on where I might be going wrong, or on why I have a false perception of the issue.