kubernetes-sigs / cluster-api-provider-aws

Kubernetes Cluster API Provider AWS provides consistent deployment and day 2 operations of "self-managed" and EKS Kubernetes clusters on AWS.
http://cluster-api-aws.sigs.k8s.io/
Apache License 2.0

[E2E] EKS tests are failing #4574

Closed richardcase closed 10 months ago

richardcase commented 11 months ago

/kind bug

What steps did you take and what happened:

Looking at testgrid, the different EKS test jobs have been consistently failing since 3/4 October. For example:

Looking at some of the logs for the failures we see errors like this:

I1011 03:29:27.276687       1 recorder.go:104] "events: Failed to initiate creation of a new EKS control plane: InvalidParameterException: The subnet ID 'eks-extresgc-384bwp-subnet-public-us-west-2a' does not exist (Service: AmazonEC2; Status Code: 400; Error Code: InvalidSubnetID.NotFound

Looking further at the logs for the reconciliation of the subnets, we see this for the mentioned subnet:

    {
        "id": "eks-extresgc-384bwp-subnet-public-us-west-2a",
        "resourceID": "subnet-0b94dc61d85f0193d",
        "cidrBlock": "10.0.0.0/20",
        "availabilityZone": "us-west-2a",
        "isPublic": true,
        "routeTableId": "rtb-00a1670a0816f30c6",
        "natGatewayId": "nat-0f38df4fc916ed6c4",
        "tags": {
            "Name": "eks-extresgc-384bwp-subnet-public-us-west-2a",
            "kubernetes.io/cluster/eks-extresgc-o755gy_eks-extresgc-384bwp-control-plane": "shared",
            "kubernetes.io/role/elb": "1",
            "sigs.k8s.io/cluster-api-provider-aws/cluster/eks-extresgc-384bwp": "owned",
            "sigs.k8s.io/cluster-api-provider-aws/role": "public"
        }
    },

We should be passing subnet-0b94dc61d85f0193d and not eks-extresgc-384bwp-subnet-public-us-west-2a when creating the EKS cluster.

Looking at the code here we can see that it is passing ID rather than the value of ResourceID.

ResourceID is a new field introduced as part of #4474. We need to update the EKS code to use ResourceID instead.

What did you expect to happen:

The EKS e2e tests to not fail.

Anything else you would like to add:

Environment:

richardcase commented 11 months ago

/triage accepted /priority critical-urgent

richardcase commented 11 months ago

/assign

richardcase commented 11 months ago

Additional error:

I1012 11:35:17.560593       1 recorder.go:104] "events: Failed to create managed RouteTable: RouteTableLimitExceeded: The maximum number of route tables has been reached.\n\tstatus code: 400, request id: f54b2239-642b-467e-adc5-166279cf98ff" type="Warning" object={"kind":"AWSManagedControlPlane","namespace":"eks-nodes-ji4qro","name":"eks-nodes-e2szrt-control-plane","uid":"8dddca77-4fb2-4415-aa47-efb68b2ba26b","apiVersion":"controlplane.cluster.x-k8s.io/v1beta2","resourceVersion":"20880"} reason="FailedCreateRouteTable"

Looks like we may need to increase the RT limits.

richardcase commented 11 months ago

The limit is 200 route tables per VPC (source). We don't create that many as part of creating a cluster... so something weird must be going on; perhaps we create another RT on every reconciliation loop. Investigating.

richardcase commented 11 months ago

We are creating a new route table on every reconciliation loop :( and eventually we hit the limit. Searching for "Created route table" in the logs of a failure yields 200+ entries...each with a different route table ID.

richardcase commented 11 months ago

I haven't seen the 200+ route table issue again. I suspect that now the EKS cluster is creating properly, we aren't looping around again and creating route tables on every reconcile. We can confirm this on another PR if we run the e2e tests.