Closed: shankey28 closed this issue 5 months ago.
@shankey28 Hi Shashi. According to the config, you are creating an unmanaged nodegroup. The EKS console and the aws eks CLI therefore won't show it, as they only cover managed nodegroups. If you want to view the unmanaged nodegroup, you can do so with eksctl, and you can also see it in the AWS Auto Scaling Groups console.
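For example, a minimal sketch (the ASG name-prefix filter in the --query expression is an assumption based on eksctl's naming convention visible in the output below):

eksctl get nodegroup --cluster nginx-ingress-controller-walkthrough --region us-east-1

aws autoscaling describe-auto-scaling-groups --region us-east-1 \
  --query "AutoScalingGroups[?starts_with(AutoScalingGroupName, 'eksctl-nginx-ingress-controller-walkthrough-nodegroup')].AutoScalingGroupName"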
Closing based on @punkwalker's comment.
What were you trying to accomplish?
Create an EKS cluster with an unmanaged node group.
What happened?
The command completed successfully: the cluster and the nodegroup were created, and both are confirmed in the CloudFormation console. However, I do not see the nodegroup in the EKS console under Cluster -> Node groups.
How to reproduce it?
eksctl get nodegroup --cluster nginx-ingress-controller-walkthrough --region us-east-1
CLUSTER                               NODEGROUP  STATUS           CREATED               MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID               ASG NAME                                                                           TYPE
nginx-ingress-controller-walkthrough  ng-1       CREATE_COMPLETE  2024-05-04T06:24:12Z  1         1         1                 t2.large       ami-057f49c54d950e56c  eksctl-nginx-ingress-controller-walkthrough-nodegroup-ng-1-NodeGroup-ADGZ9kPcgnEl  unmanaged
However, when I run the command below, I get an error:
aws eks describe-nodegroup --cluster-name nginx-ingress-controller-walkthrough --nodegroup-name ng-1 --region us-east-1
An error occurred (ResourceNotFoundException) when calling the DescribeNodegroup operation: No node group found for name: ng-1.
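As @punkwalker notes above, describe-nodegroup only covers EKS managed nodegroups, so an unmanaged nodegroup has to be inspected through its underlying Auto Scaling group instead. For example, a hedged sketch using the ASG name from the eksctl output above:

aws autoscaling describe-auto-scaling-groups --region us-east-1 \
  --auto-scaling-group-names eksctl-nginx-ingress-controller-walkthrough-nodegroup-ng-1-NodeGroup-ADGZ9kPcgnEl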
Config file (with sensitive information removed):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: nginx-ingress-controller-walkthrough
  region: us-east-1

vpc:
  id: "vpc-4545reere"      # (optional, must match VPC ID used for each subnet below)
  cidr: "10.42.0.0/16"     # (optional, must match CIDR used by the given VPC)
  subnets:
    # must provide 'private' and/or 'public' subnets by availability zone as shown

nodeGroups:
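For illustration only, a minimal nodeGroups entry consistent with the eksctl get nodegroup output above (a sketch, not the original config) would look like this; entries under managedNodeGroups, by contrast, are what appear in the EKS console:

nodeGroups:
  - name: ng-1              # shows up via eksctl and as an Auto Scaling group, not in the EKS console
    instanceType: t2.large
    desiredCapacity: 1
    minSize: 1
    maxSize: 1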
Versions
eksctl version: 0.176.0
kubectl version: v1.24.7-eks-fb459a0
OS: linux