Fixed the problem by modifying the inventory file to:
#
# Common variables for the inventory
#
[all:vars]
#
# Internal variables
#
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_pipelining=True
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=accept-new'
#
# Inventory variables
#
#
# The default for the cluster name is {{ kubeinit_cluster_distro + 'cluster' }}
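# (e.g. for an okd deployment the default name would be 'okdcluster')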
# You can override this by setting a specific value in kubeinit_inventory_cluster_name
# kubeinit_inventory_cluster_name=mycluster
kubeinit_inventory_cluster_domain=kubeinit.local
kubeinit_inventory_network_name=kimgtnet0
kubeinit_inventory_network=10.0.0.0/24
kubeinit_inventory_gateway_offset=-2
kubeinit_inventory_nameserver_offset=-3
kubeinit_inventory_dhcp_start_offset=1
kubeinit_inventory_dhcp_end_offset=-4
kubeinit_inventory_controller_name_pattern=controller-%02d
kubeinit_inventory_compute_name_pattern=compute-%02d
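# The %02d presumably expands to zero-padded indices, e.g. controller-01, compute-02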
kubeinit_inventory_post_deployment_services="none"
#
# Cluster definitions
#
# The networks you will use for your kubeinit clusters. The network name will be used
# to create a libvirt network for the cluster guest VMs. The network CIDR sets the
# range of addresses reserved for the cluster nodes. The gateway offset selects the
# gateway address within that range; a negative offset counts from the end of the
# range, so for network=10.0.0.0/24, gateway_offset=-2 selects 10.0.0.254 and
# gateway_offset=1 selects 10.0.0.1. The other offset attributes follow the same
# convention.
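# For example, with network=10.0.0.0/24 the offsets above resolve to:
#   gateway_offset=-2    -> 10.0.0.254
#   nameserver_offset=-3 -> 10.0.0.253
#   dhcp_start_offset=1  -> 10.0.0.1
#   dhcp_end_offset=-4   -> 10.0.0.252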
[kubeinit_networks]
# kimgtnet0 network=10.0.0.0/24 gateway_offset=-2 nameserver_offset=-3 dhcp_start_offset=1 dhcp_end_offset=-4
# kimgtnet1 network=10.0.1.0/24 gateway_offset=-2 nameserver_offset=-3 dhcp_start_offset=1 dhcp_end_offset=-4
# The clusters you are deploying using kubeinit. If there are no clusters defined here
# then kubeinit will assume you are only using one cluster at a time and will use the
# network defined by kubeinit_inventory_network.
[kubeinit_clusters]
# cluster0 network_name=kimgtnet0
# cluster1 network_name=kimgtnet1
#
# If variables are defined in this section, they take precedence over the
# inventory-level kubeinit_inventory_post_deployment_services and
# kubeinit_inventory_network_name values
#
# clusterXXX network_name=kimgtnetXXX post_deployment_services="none"
# clusterYYY network_name=kimgtnetYYY post_deployment_services="none"
#
# Hosts definitions
#
# The cluster's guest machines can be distributed across multiple hosts. By default
# they will be deployed on the first hypervisor. These hypervisors are activated and
# used depending on how they are referenced in the kubeinit spec string.
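# For example, the kubeinit_spec=okd-libvirt-1-3-1 used in the deployment command
# below appears to request 1 controller, 3 compute nodes, and 1 hypervisor, in which
# case only hypervisor-01 would be used.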
[hypervisor_hosts]
hypervisor-01 ansible_host=nyctea
hypervisor-02 ansible_host=tyto
# The inventory will have one host identified as the bastion host. By default, this
# role will be assumed by the first hypervisor, which is the same behavior as the
# first commented-out line. The second commented-out line would make the second
# hypervisor the bastion host. The final commented-out line would set the bastion
# host to a different host that is not being used as a hypervisor for the guest VMs
# of the clusters using this inventory.
[bastion_host]
# bastion target=hypervisor-01
# bastion target=hypervisor-02
# bastion ansible_host=bastion
# The inventory will have one host identified as the ovn-central host. By default,
# this role will be assumed by the first hypervisor, which is the same behavior as
# the first commented-out line. The second commented-out line would make the second
# hypervisor the ovn-central host.
[ovn_central_host]
# ovn-central target=hypervisor-01
# ovn-central target=hypervisor-02
#
# Cluster node definitions
#
# Controller, compute, and extra nodes can be configured as virtual machines or as
# manually provisioned bare-metal machines for the deployment.
# Use an odd number of controller nodes (enable only 1, 3, or 5 at a time) so that
# the control plane can keep quorum.
[controller_nodes:vars]
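# The os value is a per-distro map; presumably the entry matching the cluster
# distro is applied (e.g. an okd deployment would select 'coreos').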
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'coreos', 'rke': 'ubuntu'}
disk=120G
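# ram appears to be expressed in KiB, libvirt's default memory unit
# (25165824 KiB = 24 GiB).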
ram=25165824
vcpus=8
maxvcpus=16
type=virtual
target_order=hypervisor-01
[controller_nodes]
[compute_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'coreos', 'rke': 'ubuntu'}
disk=120G
ram=25165824
vcpus=8
maxvcpus=16
type=virtual
target_order="hypervisor-02,hypervisor-01"
[compute_nodes]
[extra_nodes:vars]
os={'cdk': 'ubuntu', 'okd': 'coreos'}
disk=20G
ram={'cdk': '8388608', 'okd': '16777216'}
vcpus=8
maxvcpus=16
type=virtual
target_order="hypervisor-02,hypervisor-01"
[extra_nodes]
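# These nodes presumably apply only when deploying the matching distro:
# juju-controller for cdk, bootstrap for okd.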
juju-controller distro=cdk
bootstrap distro=okd
# Service nodes are a set of service containers sharing the same pod network.
# There is an implicit 'provision' service container which will use a base OS
# container image based on the service_nodes:vars os attribute.
[service_nodes:vars]
os={'cdk': 'ubuntu', 'eks': 'centos', 'k8s': 'centos', 'kid': 'debian', 'okd': 'centos', 'rke': 'ubuntu'}
target_order=hypervisor-01
[service_nodes]
service services="bind,dnsmasq,haproxy,apache,registry" # nexus
And the deployment command to:
podman run --rm -it \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
  -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:z \
  -v ~/.ssh/config:/root/.ssh/config:z \
  -v ./kubeinit/inventory:/kubeinit/kubeinit/inventory \
  quay.io/kubeinit/kubeinit:$TAG \
  -vvv --user root \
  -e kubeinit_spec=okd-libvirt-1-3-1 \
  -i ./kubeinit/inventory \
  ./kubeinit/playbook.yml
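The inventory bind mount (-v ./kubeinit/inventory:/kubeinit/kubeinit/inventory) is presumably what makes the edited inventory visible inside the container; without it, the playbook reads the copy baked into the image. As a quick sanity check, a sketch assuming the image ships a POSIX shell and grep:
podman run --rm -it --entrypoint /bin/sh \
  -v ./kubeinit/inventory:/kubeinit/kubeinit/inventory \
  quay.io/kubeinit/kubeinit:$TAG \
  -c 'grep -n -e disk= -e ram= /kubeinit/kubeinit/inventory'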
The original report: after deploying with the Quay image, changes to the inventory file were not taken into account.
The HDD size for the controller and compute nodes had been changed to 120GB, with 25GB of memory for the compute nodes:
Here is the command line that was used to deploy the cluster:
After the deployment completed, the controller node had a 25GB HDD and the worker nodes had 8GB of RAM and 30GB HDDs.