crossplane-contrib / provider-jet-aws

AWS Provider for Crossplane that is built with Terrajet.
https://crossplane.io
Apache License 2.0

Kubeconfig Secret has no data #229

Open andrzej-natzka opened 2 years ago

andrzej-natzka commented 2 years ago

What happened?

I set up an EKS cluster using the example YAML manifests:

apiVersion: iam.aws.jet.crossplane.io/v1alpha2
kind: Role
metadata:
  name: sample-eks-cluster
spec:
  forProvider:
    assumeRolePolicy: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
  providerConfigRef:
    name: aws-jet-provider
---
apiVersion: iam.aws.jet.crossplane.io/v1alpha2
kind: RolePolicyAttachment
metadata:
  name: sample-cluster-policy-1
spec:
  forProvider:
    policyArn: arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
    roleRef:
      name: sample-eks-cluster
  providerConfigRef:
    name: aws-jet-provider
---
apiVersion: iam.aws.jet.crossplane.io/v1alpha2
kind: RolePolicyAttachment
metadata:
  name: sample-cluster-policy-2
spec:
  forProvider:
    policyArn: arn:aws:iam::aws:policy/AmazonEKSServicePolicy
    roleRef:
      name: sample-eks-cluster
  providerConfigRef:
    name: aws-jet-provider
---
apiVersion: ec2.aws.jet.crossplane.io/v1alpha2
kind: VPC
metadata:
  name: sample-vpc
spec:
  forProvider:
    region: eu-west-1
    cidrBlock: 10.0.0.0/16
    tags:
      Name: sample-vpc
  providerConfigRef:
    name: aws-jet-provider
---
apiVersion: ec2.aws.jet.crossplane.io/v1alpha2
kind: Subnet
metadata:
  name: sample-subnet1
spec:
  forProvider:
    region: eu-west-1
    availabilityZone: eu-west-1b
    vpcIdRef:
      name: sample-vpc
    cidrBlock: 10.0.0.0/24
    tags:
      Name: eks-snet
  providerConfigRef:
    name: aws-jet-provider
---
apiVersion: ec2.aws.jet.crossplane.io/v1alpha2
kind: Subnet
metadata:
  name: sample-subnet2
spec:
  forProvider:
    region: eu-west-1
    availabilityZone: eu-west-1a
    vpcIdRef:
      name: sample-vpc
    cidrBlock: 10.0.1.0/24
    tags:
      Name: eks-snet
  providerConfigRef:
    name: aws-jet-provider
---
apiVersion: eks.aws.jet.crossplane.io/v1alpha2
kind: Cluster
metadata:
  name: sample-eks-cluster
  labels:
    example: "true"
spec:
  forProvider:
    region: eu-west-1
    version: "1.21"
    roleArnRef:
      name: sample-eks-cluster
    vpcConfig:
      - subnetIdRefs:
          - name: sample-subnet1
          - name: sample-subnet2
  providerConfigRef:
    name: aws-jet-provider
  writeConnectionSecretToRef:
    name: cluster-conn
    namespace: default

The EKS cluster and all resources were created successfully. In the default namespace I see the secret:

k get secret -n default
NAME                        TYPE                                  DATA   AGE
cluster-conn                connection.crossplane.io/v1alpha1     0      30m

Secret YAML manifest:

k get secret cluster-conn -o yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: "2022-08-18T14:32:33Z"
  name: cluster-conn
  namespace: default
  ownerReferences:
  - apiVersion: eks.aws.jet.crossplane.io/v1alpha2
    controller: true
    kind: Cluster
    name: sample-eks-cluster
    uid: 5a0abfb2-7a6e-459e-92a6-a6ce4adba8b1
  resourceVersion: "31815318"
  uid: 25eb1c83-8b04-493a-807b-7356f2352a86
type: connection.crossplane.io/v1alpha1

There is no kubeconfig data in it. I checked the classic AWS provider, and everything works fine there.

How can we reproduce it?

Just apply the manifest file I copied at the beginning of this post, then check the secret in the default namespace.

What environment did it happen in?

Crossplane version: crossplane-1.9.0
Provider: aws-jet-provider (crossplane/provider-jet-aws:main, INSTALLED=True, HEALTHY=True, AGE=112m)

haarchri commented 2 years ago

I think we need to build custom connection details, because the Terraform resource will not publish a kubeconfig by default.

The process in Terraform looks like this:

https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v13.2.1/templates/kubeconfig.tpl

locals {
  kubeconfig = templatefile("templates/kubeconfig.tpl", {
    kubeconfig_name                   = local.kubeconfig_name
    endpoint                          = aws_eks_cluster.example.endpoint
    cluster_auth_base64               = aws_eks_cluster.example.certificate_authority[0].data
    aws_authenticator_command         = "aws-iam-authenticator"
    aws_authenticator_command_args    = ["token", "-i", aws_eks_cluster.example.name]
    aws_authenticator_additional_args = []
    aws_authenticator_env_variables   = {}
  })
}

output "kubeconfig" { value = local.kubeconfig }

haarchri commented 1 year ago

It looks like upbound/provider-aws has fixed this issue: https://github.com/upbound/provider-aws/blob/75f320d/internal/controller/eks/clusterauth/controller.go#L145