Closed richardcase closed 10 months ago
/triage accepted /priority critical-urgent
/assign
Additional error:
I1012 11:35:17.560593 1 recorder.go:104] "events: Failed to create managed RouteTable: RouteTableLimitExceeded: The maximum number of route tables has been reached.\n\tstatus code: 400, request id: f54b2239-642b-467e-adc5-166279cf98ff" type="Warning" object={"kind":"AWSManagedControlPlane","namespace":"eks-nodes-ji4qro","name":"eks-nodes-e2szrt-control-plane","uid":"8dddca77-4fb2-4415-aa47-efb68b2ba26b","apiVersion":"controlplane.cluster.x-k8s.io/v1beta2","resourceVersion":"20880"} reason="FailedCreateRouteTable"
Looks like we may need to increase the RT limits.
The limit is 200 route tables per VPC (source). We don't create that many as part of creating a cluster, so something weird must be going on; perhaps on every reconciliation loop we create another RT. Investigating.
We are creating a new route table on every reconciliation loop :( And we hit the limit. Searching for "Created route table" in the logs of a failure yields 200+ entries, each with a different route table ID.
I haven't seen the 200+ route table issue again. I suspect that now that the EKS cluster is creating properly we aren't looping around again and creating route tables on every reconcile. We can confirm this on another PR if we run the e2e tests.
/kind bug
What steps did you take and what happened:
Looking at testgrid, the different EKS jobs have been consistently failing since 3/4 October. For example:
Looking at some of the logs for the failures we see errors like this:
Looking at the logs some more for the reconciliation of the subnets we see this for mentioned subnet:
We should be passing subnet-0b94dc61d85f0193d and not eks-extresgc-384bwp-subnet-public-us-west-2a when creating the EKS cluster.
Looking at the code here we can see that it is passing the ID and not the value of ResourceID.
ResourceID is a new field introduced as part of #4474; we need to update the EKS code to use ResourceID instead.
What did you expect to happen:
The EKS e2e to not fail
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):