HCL-TECH-SOFTWARE / connections-automation

Deployment and upgrade automation scripts for HCL Connections 7.0 based on Ansible
Apache License 2.0

problem with kubeadm-config.yml - apiVersion: kubeadm.k8s.io/v1beta2 #254

Closed MagnaFrisia90 closed 1 year ago

MagnaFrisia90 commented 1 year ago

When running the Ansible playbook for Component Pack, I am getting an error related to the API version used:

```
TASK [setup-master-node : Initialize master for single node installation] ****
fatal: [solcon15.solvito.cloud]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--config=/tmp/k8s_ansible/kubeadm-config.yaml"], "delta": "0:00:00.866046", "end": "2023-02-08 15:55:56.572283", "msg": "non-zero return code", "rc": 1, "start": "2023-02-08 15:55:55.706237", "stderr": "W0208 15:55:56.060972 1960 common.go:83] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta2\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.\nerror execution phase preflight: [preflight]
```
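The stderr already names the remedy: `kubeadm config migrate --old-config old.yaml --new-config new.yaml` rewrites the deprecated spec against the newer API group version. As a rough illustration (this is not the template the playbook actually renders), a migrated config mostly differs in its apiVersion lines:

```yaml
# Illustrative kubeadm config only -- not the repo's actual template.
# Before migration the file starts with: apiVersion: kubeadm.k8s.io/v1beta2
# After `kubeadm config migrate --old-config old.yaml --new-config new.yaml`:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.24.1"   # assumed here to match the playbook default discussed below
```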

sabrina-yee commented 1 year ago

@MagnaFrisia90, the issue sounds very similar to #221. Could you confirm the playbook you're running is the one described here?

MagnaFrisia90 commented 1 year ago

I am not sure what you mean by "here". The link jumps directly to "Setting up Component Pack for HCL Connections 8 with its dependencies", but that is not what I am planning. I am running this command: "ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/setup-component-pack-only.yml" (I only want to install Kubernetes with your script). Inside setup-component-pack-only.yml this imports:

```yaml
- name: Setup Haproxy
  import_playbook: third_party/setup-haproxy.yml

- name: Setup NFS
  import_playbook: third_party/setup-nfs.yml

- name: Setup containerd
  import_playbook: third_party/setup-containerd.yml

- name: Setup Kubernetes
  import_playbook: third_party/kubernetes/setup-kubernetes.yml
```

sabrina-yee commented 1 year ago

I'd suggest cleaning up the environment a bit first, then running the setup-component-pack-infra-only.yml playbook, which sets up the infrastructure only.

MagnaFrisia90 commented 1 year ago

Same issue after manually removing containerd and running cleanup-k8s.

umeli commented 1 year ago

Which version of k8s are you installing? I had some issues when I tried to install Component Pack 7 on k8s version 1.22 and later. I prefer 1.21.* for CP7.

MagnaFrisia90 commented 1 year ago

I cannot answer this question; I am only running "ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/setup-component-pack-infra-only.yml". I assume it picks the correct version automatically. Maybe this is what you asked for?

```
[ansible@solcon13 kubernetes]$ cat kubernetes-install/vars/main.yml
__kubernetes_version: "{{ kubernetes_version | default('1.24.1') }}"
```

But this is nothing I should touch, right? Or can I just modify the value here?

And how can I decide which Kubernetes version string to use?

MagnaFrisia90 commented 1 year ago

Not sure what the cause was, but after a few steps (listed in my comment further below) it seems the issue is solved.

I now ran into the next problem, which may belong in another topic:

```
TASK [setup-kubectl : Copy .kube to controller] ******
fatal: [solcon15.solvito.cloud]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L solcon15.solvito.cloud:/home/ansible/.kube /tmp/.kube_solcon15.solvito.cloud", "msg": "Warning: Permanently added 'solcon15.solvito.cloud,172.16.10.107' (ECDSA) to the list of known hosts.\r\nprotocol version mismatch -- is your shell clean?\n(see the rsync man page for an explanation)\nrsync error: protocol incompatibility (code 2) at compat.c(178) [Receiver=3.1.2]\n", "rc": 2}
```
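As an aside, rsync's "protocol version mismatch -- is your shell clean?" classically means the remote login shell prints something to stdout in non-interactive sessions, which corrupts the rsync protocol stream. A minimal hypothetical check (not part of this repo's playbooks; group name and layout are assumptions) is to verify that ssh to the node produces no output:

```yaml
# Hypothetical diagnostic playbook, not from connections-automation.
# The classic test from the rsync man page: `ssh <host> /bin/true` must
# print nothing; any output means a remote dotfile is not "clean" for
# non-interactive sessions and will break rsync.
- hosts: k8s_nodes          # assumed inventory group name
  gather_facts: false
  tasks:
    - name: Check that non-interactive ssh produces no output
      ansible.builtin.command: ssh -o BatchMode=yes {{ inventory_hostname }} /bin/true
      delegate_to: localhost
      register: shell_check
      changed_when: false
      failed_when: shell_check.stdout | length > 0
```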

I would still like to know: when should I set which Kubernetes version inside all.yml?

umeli commented 1 year ago

Maybe you need to set { kubernetes_version: "1.21.4" } in your all.yml.
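For illustration, such an override could look like this, assuming the quick_start example keeps its group variables in a group_vars/all.yml (the exact path in your environment may differ):

```yaml
# group_vars/all.yml (path assumed; adjust to your inventory layout)
# Overrides the default('1.24.1') fallback in kubernetes-install/vars/main.yml.
kubernetes_version: "1.21.4"
```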

MagnaFrisia90 commented 1 year ago

Not sure what the cause was, but I have done the following steps and it seems the issue is solved:

  • recreated the git repo
  • yum remove containerd
  • pkill ssh-agent
  • rm -Rf /tmp/*
  • rm -Rf /etc/containerd
  • ansible-playbook -i environments/examples/cnx7/quick_start/inventory.ini playbooks/hcl/cleanup/cleanup-k8s.yml

sabrina-yee commented 1 year ago

Good to hear the apiVersion issue is fixed.

The Kubernetes version, if not overridden in all.yml, is set in this file: https://github.com/HCL-TECH-SOFTWARE/connections-automation/blob/84883bf2986b05643857d6255cc2cea5fc92d97a/roles/third_party/kubernetes/kubernetes-install/vars/main.yml#L2

I'd recommend using the default, since that's what was tested.
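For reference, the mechanism in that line is Jinja2's default() filter, so the fallback only applies when kubernetes_version is undefined. Annotated (the comments are explanatory notes, not from the repo):

```yaml
# roles/third_party/kubernetes/kubernetes-install/vars/main.yml (linked above)
__kubernetes_version: "{{ kubernetes_version | default('1.24.1') }}"
# default() only fires when kubernetes_version is undefined, so a value set
# in all.yml (or passed with -e on the ansible-playbook command line) wins
# over the 1.24.1 fallback.
```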