ansible-collections / community.aws

Ansible Collection for Community AWS

eks_cluster - Feature request: add managed nodes option #594

Open hectoralicea opened 3 years ago

hectoralicea commented 3 years ago
SUMMARY

aws_eks_cluster builds an EKS cluster with no nodes, therefore all Helm installs fail.

hector$ kc get nodes -A
No resources found
ISSUE TYPE
COMPONENT NAME

community.aws.aws_eks_cluster – Manage Elastic Kubernetes Service Clusters

ANSIBLE VERSION
hector$ ansible --version
ansible 2.9.11
  config file = /Users/hector/gitrepos/acr-ansible-eks/ansible.cfg
  configured module search path = ['/Users/hector/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.9 (default, Apr  3 2021, 01:50:09) [Clang 12.0.0 (clang-1200.0.32.29)]
CONFIGURATION
DEFAULT_HOST_LIST(/Users/hector/gitrepos/acr-ansible-eks/ansible.cfg) = ['/Users/hector/gitrepos/acr-ansible-eks/environments/ha.aws_eks.yml']
DEFAULT_STDOUT_CALLBACK(/Users/hector/gitrepos/acr-ansible-eks/ansible.cfg) = yaml
INTERPRETER_PYTHON(/Users/hector/gitrepos/acr-ansible-eks/ansible.cfg) = /usr/local/opt/python/libexec/bin/python
OS / ENVIRONMENT

Running from macOS. Target is just an EKS cluster.

STEPS TO REPRODUCE

The following is the Ansible task that creates the cluster:

- name: Create an EKS cluster
  community.aws.aws_eks_cluster:
    state: present
    name: "{{ eks_cluster_name }}"
    #version: 1.19
    role_arn: "{{ eks_role_arn }}"
    region: "{{ eks_region }}"
    subnets: "{{ eks_subnets }}"
    security_groups: "{{ eks_security_groups }}"
    wait: yes
  register: eks_facts
EXPECTED RESULTS

kubectl get nodes -A should return at least one node

ACTUAL RESULTS

kubectl get nodes -A returns No resources found

The following is the output of kubectl get events:

NAMESPACE     LAST SEEN   TYPE      REASON                 OBJECT                                  MESSAGE
cicd          13s         Warning   FailedScheduling       pod/jenkins-0                           no nodes available to schedule pods
cicd          10m         Normal    WaitForFirstConsumer   persistentvolumeclaim/jenkins           waiting for first consumer to be created before binding
cicd          10m         Normal    SuccessfulCreate       statefulset/jenkins                     create Pod jenkins-0 in StatefulSet jenkins successful
cicd          44s         Normal    WaitForPodScheduled    persistentvolumeclaim/jenkins           waiting for pod jenkins-0 to be scheduled
kube-system   43s         Warning   FailedScheduling       pod/coredns-56b458df85-hl49d            no nodes available to schedule pods
kube-system   13s         Warning   FailedScheduling       pod/coredns-56b458df85-x7xcv            no nodes available to schedule pods
kube-system   14m         Normal    SuccessfulCreate       replicaset/coredns-56b458df85           Created pod: coredns-56b458df85-hl49d
kube-system   14m         Normal    SuccessfulCreate       replicaset/coredns-56b458df85           Created pod: coredns-56b458df85-x7xcv
kube-system   14m         Normal    ScalingReplicaSet      deployment/coredns                      Scaled up replica set coredns-56b458df85 to 2
kube-system   14m         Normal    LeaderElection         configmap/cp-vpc-resource-controller    ip-172-16-187-126.us-east-2.compute.internal_98d67022-b3c9-4525-a0fb-6eb0a0ae58bf became leader
kube-system   14m         Normal    LeaderElection         configmap/eks-certificates-controller   ip-172-16-187-126.us-east-2.compute.internal became leader
kube-system   14m         Normal    LeaderElection         endpoints/kube-controller-manager       ip-172-16-187-126.us-east-2.compute.internal_2f20a37d-002d-481b-ac51-97c66c70a4d1 became leader
kube-system   14m         Normal    LeaderElection         lease/kube-controller-manager           ip-172-16-187-126.us-east-2.compute.internal_2f20a37d-002d-481b-ac51-97c66c70a4d1 became leader
kube-system   14m         Normal    LeaderElection         endpoints/kube-scheduler                ip-172-16-187-126.us-east-2.compute.internal_5596de6b-fb66-41ea-9783-8f36295acb7d became leader
kube-system   14m         Normal    LeaderElection         lease/kube-scheduler                    ip-172-16-187-126.us-east-2.compute.internal_5596de6b-fb66-41ea-9783-8f36295acb7d became leader
ansibullbot commented 3 years ago

Files identified in the description: None

If these files are inaccurate, please update the component name section of the description or use the !component bot command.

click here for bot help

mszumilak commented 3 years ago

This does not seem to be a bug. An EKS cluster in AWS is just a control plane; it does not include any nodes itself. There are 2 types of nodes in EKS: self-managed nodes and managed node groups.

Those nodes have to be added separately; at this moment Ansible does not have a module to create managed node groups.

In my opinion this issue is a feature request, not a bug report.
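
Until such a module exists, a managed node group can be created from a playbook by calling the AWS CLI. The tasks below are only a sketch: eks_nodegroup_role_arn is a hypothetical IAM role ARN for the worker nodes, eks_cluster_name, eks_subnets and eks_region reuse the variables from the task above, and the scaling values are arbitrary placeholders. aws eks create-nodegroup and aws eks wait nodegroup-active are the standard AWS CLI subcommands for this.

# Sketch only: requires the AWS CLI on the control host; variable names are placeholders.
- name: Create a managed node group for the cluster
  command: >
    aws eks create-nodegroup
    --cluster-name {{ eks_cluster_name }}
    --nodegroup-name {{ eks_cluster_name }}-default
    --node-role {{ eks_nodegroup_role_arn }}
    --subnets {{ eks_subnets | join(' ') }}
    --scaling-config minSize=1,maxSize=3,desiredSize=2
    --region {{ eks_region }}

# Block until the node group is active, so kubectl get nodes returns nodes afterwards.
- name: Wait until the node group is active
  command: >
    aws eks wait nodegroup-active
    --cluster-name {{ eks_cluster_name }}
    --nodegroup-name {{ eks_cluster_name }}-default
    --region {{ eks_region }}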

hectoralicea commented 3 years ago

Those nodes have to be added separately; at this moment Ansible does not have a module to create managed node groups.

Ok, so basically this aws_eks_cluster Ansible module is useless on its own. I'll revert to using the shell module to invoke eksctl directly. I did not want to have to install eksctl on the Jenkins slave, but I have no choice but to.
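
For reference, the eksctl workaround could look roughly like the task below. This is only a sketch: it assumes eksctl is already installed on the Jenkins node, it reuses the variable names from the playbook above, and the node type and count are arbitrary placeholders.

# Workaround sketch: create a managed node group with eksctl until a dedicated module exists.
- name: Create a managed node group with eksctl
  shell: >
    eksctl create nodegroup
    --cluster {{ eks_cluster_name }}
    --name {{ eks_cluster_name }}-workers
    --managed
    --nodes 2
    --node-type t3.medium
    --region {{ eks_region }}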