Open andrzej-natzka opened 2 years ago
I think we need to build custom connection details, because the Terraform resource does not publish a kubeconfig by default.
In Terraform the process looks like this:
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v13.2.1/templates/kubeconfig.tpl
```hcl
locals {
  kubeconfig = templatefile("templates/kubeconfig.tpl", {
    kubeconfig_name                   = local.kubeconfig_name
    endpoint                          = aws_eks_cluster.example.endpoint
    cluster_auth_base64               = aws_eks_cluster.example.certificate_authority[0].data
    aws_authenticator_command         = "aws-iam-authenticator"
    aws_authenticator_command_args    = ["token", "-i", aws_eks_cluster.example.name]
    aws_authenticator_additional_args = []
    aws_authenticator_env_variables   = {}
  })
}

output "kubeconfig" {
  value = local.kubeconfig
}
```
It looks like upbound/provider-aws has fixed this issue: https://github.com/upbound/provider-aws/blob/75f320d/internal/controller/eks/clusterauth/controller.go#L145
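That controller assembles a ready-to-use kubeconfig from the cluster attributes and publishes it as a connection detail. For context, here is a rough sketch of that kind of assembly using client-go's clientcmd types; the function name and details are illustrative, not the controller's actual code:

```go
package main

import (
	"encoding/base64"
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// buildKubeconfig renders a kubeconfig for an EKS cluster from its name,
// endpoint, and base64-encoded CA bundle, using aws-iam-authenticator for
// exec-based credentials (mirroring the kubeconfig.tpl template above).
func buildKubeconfig(name, endpoint, caBase64 string) ([]byte, error) {
	ca, err := base64.StdEncoding.DecodeString(caBase64)
	if err != nil {
		return nil, fmt.Errorf("cannot decode cluster CA: %w", err)
	}
	cfg := api.NewConfig()
	cfg.Clusters[name] = &api.Cluster{
		Server:                   endpoint,
		CertificateAuthorityData: ca,
	}
	cfg.AuthInfos[name] = &api.AuthInfo{
		Exec: &api.ExecConfig{
			APIVersion: "client.authentication.k8s.io/v1beta1",
			Command:    "aws-iam-authenticator",
			Args:       []string{"token", "-i", name},
		},
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.Write(*cfg)
}
```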
What happened?
I set up an EKS cluster using the example YAML manifests.
The EKS cluster and all resources were created successfully. In the default namespace I see the secret:
Secret YAML manifest:
There is no kubeconfig data in it. I checked the classic AWS provider; everything works fine there.
How can we reproduce it?
Just apply the manifest file I copied at the beginning of my post, then check the secret in the default namespace.
What environment did it happen in?
Crossplane version: crossplane-1.9.0
Provider: aws-jet-provider (package crossplane/provider-jet-aws:main, INSTALLED=True, HEALTHY=True, AGE=112m)