alvistack / vagrant-kubernetes

Vagrant Box Packaging for Kubernetes
Apache License 2.0

How to Deploy Cilium as Default CNI? #2

Closed: templarfelix closed this issue 1 year ago

templarfelix commented 1 year ago

How do I enable Cilium? https://github.com/alvistack/vagrant-kubernetes/blob/master/playbooks/60-kube_cilium-install.yml

hswong3i commented 1 year ago

Short Answer

# `vagrant up` the box and SSH into it
$ git clone -b develop https://github.com/alvistack/vagrant-kubernetes.git
$ cd vagrant-kubernetes
$ vagrant up
$ vagrant ssh

# Working as root
vagrant@kubernetes-1:~$ sudo su -

# Wait a few minutes for /usr/local/bin/virt-sysprep-firstboot.sh to finish
root@kubernetes-1:~# kubectl get --raw='/readyz?verbose' | grep 'check passed'
readyz check passed
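
# (Not part of the original steps; just a sketch.) Instead of re-running the
# check by hand, the readiness endpoint can be polled in a loop:
root@kubernetes-1:~# until kubectl get --raw='/readyz' >/dev/null 2>&1; do sleep 10; done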

# At this point Kubernetes should already be self-provisioned with Ansible,
# without any default CNI
root@kubernetes-1:~# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-565d847f94-nhls4               1/1     Running   0          113s
kube-system   coredns-565d847f94-tx8st               1/1     Running   0          113s
kube-system   kube-apiserver-kubernetes-1            1/1     Running   0          2m8s
kube-system   kube-controller-manager-kubernetes-1   1/1     Running   0          2m8s
kube-system   kube-proxy-fmg77                       1/1     Running   0          113s
kube-system   kube-scheduler-kubernetes-1            1/1     Running   0          2m8s
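
# (Sketch, not in the original answer.) Another way to confirm that no CNI has
# been deployed yet is to check that the CNI config directory is still empty:
root@kubernetes-1:~# ls /etc/cni/net.d/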

# Deploy cilium as CNI
root@kubernetes-1:~# apt update
root@kubernetes-1:~# ansible-playbook /etc/ansible/playbooks/60-kube_cilium-install.yml 
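
# (Optional sketch.) The DaemonSet is named "cilium" (see the status output
# below), so its rollout can be waited on explicitly:
root@kubernetes-1:~# kubectl -n kube-system rollout status daemonset/cilium --timeout=300s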

# Check again; Cilium should now be ready
root@kubernetes-1:~# kubectl get pod --all-namespaces 
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   cilium-252xk                           1/1     Running   0          3m
kube-system   cilium-node-init-9b9df                 1/1     Running   0          3m
kube-system   cilium-operator-5478d947cd-9dhnd       1/1     Running   0          3m
kube-system   coredns-565d847f94-b5m5h               1/1     Running   0          22s
kube-system   coredns-565d847f94-jfq5g               1/1     Running   0          37s
kube-system   kube-addon-manager-kubernetes-1        1/1     Running   0          3m8s
kube-system   kube-apiserver-kubernetes-1            1/1     Running   0          6m51s
kube-system   kube-controller-manager-kubernetes-1   1/1     Running   0          6m51s
kube-system   kube-proxy-fmg77                       1/1     Running   0          6m36s
kube-system   kube-scheduler-kubernetes-1            1/1     Running   0          6m51s
root@kubernetes-1:~# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     2/2 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.4: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.4: 1
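
As an optional follow-up (not part of the original answer above), the cilium CLI can also run an end-to-end connectivity check; this deploys temporary test pods and takes a few minutes:

# Optional smoke test with the cilium CLI (deploys temporary test pods)
root@kubernetes-1:~# cilium connectivity test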

TL;DR

When simply running vagrant up

This box has a self-provision script (see https://github.com/alvistack/vagrant-kubernetes/blob/master/playbooks/templates/usr/local/bin/virt-sysprep-firstboot.sh.j2) that performs the above initialization with Ansible; it is executed by systemd during the finalize phase of system boot.
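
For example, the firstboot provisioning can be inspected from inside the box. This is only a sketch: the systemd unit name below is assumed from the script name and may differ.

# Sketch: inspect the firstboot provisioning script and its logs.
# "virt-sysprep-firstboot.service" is an assumed unit name; confirm with
# `systemctl list-unit-files | grep firstboot`.
cat /usr/local/bin/virt-sysprep-firstboot.sh
journalctl -u virt-sysprep-firstboot.service --no-pager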

But this mode skips the deployment of a CNI, so I can reuse the box for testing Cilium, Flannel, or Weave as described below. It also only covers a single-node AIO deployment.

When self-testing with sudo -E molecule test -s kubernetes-1.25-libvirt

Just after the box comes up and before the above script executes, I stop the self-provisioning (see https://github.com/alvistack/vagrant-kubernetes/blob/master/molecule/kubernetes-1.25-libvirt/molecule.yml#L34-L37).

During the converge phase (see https://github.com/alvistack/vagrant-kubernetes/blob/master/molecule/default/converge.yml), it just runs the normal self-provision steps, for a multi-node cluster deployment.

During the verify phase (see https://github.com/alvistack/vagrant-kubernetes/blob/master/molecule/default/verify.yml), it deploys Flannel as the default CNI for running the CNCF conformance test (see https://github.com/cncf/k8s-conformance/tree/master/v1.25/alvistack-vagrant-kubernetes#deploy-kubernetes).
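
If needed, the molecule phases can also be run one at a time with molecule's standard subcommands (a sketch, using the scenario name from above):

# Run the molecule phases individually (sketch)
sudo -E molecule converge -s kubernetes-1.25-libvirt   # provision the multi-node cluster
sudo -E molecule verify -s kubernetes-1.25-libvirt     # deploy flannel and verify
sudo -E molecule destroy -s kubernetes-1.25-libvirt    # tear down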

When reused as a base box for testing other CNIs

It also stops the self-provisioning at the beginning (see https://github.com/alvistack/ansible-role-kube_cilium/blob/master/molecule/kubernetes-1.25-libvirt/molecule.yml).

Therefore it can deploy its own CNI for testing during the verify phase (see https://github.com/alvistack/ansible-role-kube_cilium/blob/master/molecule/default/verify.yml).
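
For instance, testing Cilium through that role's own molecule scenario might look like this (a sketch; repository and scenario name taken from the links above):

# Sketch: run the kube_cilium role's molecule scenario, which reuses this base box
git clone https://github.com/alvistack/ansible-role-kube_cilium.git
cd ansible-role-kube_cilium
sudo -E molecule test -s kubernetes-1.25-libvirt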

templarfelix commented 1 year ago

Thanks