marcellodesales opened this issue 4 years ago
With t2.micro nodes I ran into scheduling problems - 0/1 nodes are available: 1 Too many pods... Even though there's an autoscaling group for the cluster... Not sure of the reason... I changed to t2.medium and that resolved it...
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ingress-aws-alb-ingress-controller-66f95d8d-v9n6m 0/1 Pending 0 114s
☸️ kubectl@1.18.6 📛 kustomize@v3.8.1 🧾 terraform@v0.13.4
⎈ default 🔐 eks_eks-ppd-super-cash-example-com
~/dev/github.com/k-mitevski/terraform-k8s/06_terraform_envs_customised/environments/ppd on master! ⌚ 13:53:13
$ kubectl describe pod ingress-aws-alb-ingress-controller-66f95d8d-v9n6m
Name: ingress-aws-alb-ingress-controller-66f95d8d-v9n6m
Namespace: default
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=ingress
app.kubernetes.io/name=aws-alb-ingress-controller
pod-template-hash=66f95d8d
Annotations: kubernetes.io/psp: eks.privileged
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/ingress-aws-alb-ingress-controller-66f95d8d
Containers:
aws-alb-ingress-controller:
Image: docker.io/amazon/aws-alb-ingress-controller:v1.1.8
Port: 10254/TCP
Host Port: 0/TCP
Args:
--cluster-name=eks-ppd-super-cash-example-com
--ingress-class=alb
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from ingress-aws-alb-ingress-controller-token-bgv6p (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
ingress-aws-alb-ingress-controller-token-bgv6p:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-aws-alb-ingress-controller-token-bgv6p
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35s (x5 over 2m6s) default-scheduler 0/1 nodes are available: 1 Too many pods.
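For reference, the "Too many pods" condition is about the node's pod capacity, not CPU or memory: on EKS the VPC CNI caps pods at the number of ENI IP addresses the instance type can hold. A quick way to check it (plain kubectl, no assumptions beyond a working kubeconfig):
$ kubectl get nodes -o custom-columns='NAME:.metadata.name,MAX_PODS:.status.allocatable.pods'
# On EKS the allocatable pod count comes from the ENI limits of the instance type:
#   max pods = ENIs x (IPv4 addresses per ENI - 1) + 2
#   t2.micro  -> 2 x (2 - 1) + 2 = 4   (the kube-system pods alone can use this up)
#   t2.medium -> 3 x (6 - 1) + 2 = 17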
Cluster autoscaler
Error: error creating EKS Node Group (eks-ppd-super-cash-example-com:eks-ppd-super-cash-example-com-first-grand-primate): InvalidParameterException: Subnets are not tagged with the required tag. Please tag all subnets with Key: kubernetes.io/cluster/eks-ppd-super-cash-example-com Value: shared
{
RespMetadata: {
StatusCode: 400,
RequestID: "249ff5ae-e506-40aa-a56f-ecc3441e856e"
},
ClusterName: "eks-ppd-super-cash-example-com",
Message_: "Subnets are not tagged with the required tag. Please tag all subnets with Key: kubernetes.io/cluster/eks-ppd-super-cash-example-com Value: shared",
NodegroupName: "eks-ppd-super-cash-example-com-first-grand-primate"
}
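A quick way to double-check (or patch) the tags from the CLI while iterating - the subnet ID below is a placeholder:
$ aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 --query 'Subnets[].Tags'
$ aws ec2 create-tags --resources subnet-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/eks-ppd-super-cash-example-com,Value=shared
Tagging by hand only unblocks the node group once, though; the lasting fix is to declare the tags in Terraform so the next apply doesn't drift.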
The tag key includes eks-${local.env_domain}, which is the name of the cluster... The tutorial's subnet tags use var.cluster_name:
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
In my environment, with the cluster name built from local.env_domain, the tags become:
public_subnet_tags = {
"kubernetes.io/cluster/eks-${local.env_domain}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/eks-${local.env_domain}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
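For completeness, a minimal sketch of where these tags live, assuming the VPC is created with the terraform-aws-modules/vpc/aws module as in the tutorial (the module name and omitted arguments are illustrative):
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "eks-${local.env_domain}"
  # ... cidr, azs, public_subnets, private_subnets omitted ...

  # EKS node groups require every subnet to carry the cluster tag,
  # and the cluster name in the key must match exactly.
  public_subnet_tags = {
    "kubernetes.io/cluster/eks-${local.env_domain}" = "shared"
    "kubernetes.io/role/elb"                        = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/eks-${local.env_domain}" = "shared"
    "kubernetes.io/role/internal-elb"               = "1"
  }
}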
Hi there,
Thank you for the awesome tutorial at https://learnk8s.io/terraform-eks#you-can-provision-an-eks-cluster-with-terraform-too... Very useful, as I was looking for an example to get different clusters per environment... I just need 2... Really appreciate your work!!!
I just got an error creating the cluster using step 6. I had updated a couple of properties, shown below, and the errors are pasted above...
Error
I'm getting the errors shown above... At this point, I know I can ping amazonaws.com... But maybe we are missing a security group? The cluster got created...
Environment
Setup
Missing step to install the authenticator.
Other changes made to the original
Upgraded from 1.17 to 1.18.
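For anyone reproducing this: assuming the cluster is created with the terraform-aws-modules/eks/aws module, as in the tutorial, the version bump is roughly this (a sketch, not my exact config):
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "eks-${local.env_domain}"
  cluster_version = "1.18"  # was "1.17"
  # ... vpc_id, subnets and node group settings unchanged ...
}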
API server SSL certs might be wrong
The API server seems unreachable, and I can see that the certs are incorrect...
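A quick way to inspect which certificate the API server actually presents, and to rule out a stale local kubeconfig (the endpoint and region below are placeholders):
$ openssl s_client -connect ABCDEF0123456789.gr7.us-east-1.eks.amazonaws.com:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates
$ aws eks update-kubeconfig --name eks-ppd-super-cash-example-com --region us-east-1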
Thank you,
Marcello