errordeveloper closed this issue 3 years ago
This is quite likely to do with the deprecation of apiextensions.k8s.io/v1beta1 in Kubernetes, as OpenShift 4.9 is based on 1.22.
Interestingly enough, this project originally started with v1, and v1beta1 had to be used due to an issue with the certification system (see #23). It turns out that v1 was already supported in OpenShift 4.4.
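For reference, the main mechanical change when moving a CRD from apiextensions.k8s.io/v1beta1 to v1 is that a structural schema becomes mandatory and is defined per entry under spec.versions. A minimal sketch (the CiliumConfig names match this project; the permissive schema shown is illustrative only):

```yaml
apiVersion: apiextensions.k8s.io/v1   # was: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ciliumconfigs.cilium.io
spec:
  group: cilium.io
  scope: Namespaced
  names:
    kind: CiliumConfig
    listKind: CiliumConfigList
    plural: ciliumconfigs
    singular: ciliumconfig
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:                           # required in v1, per-version
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```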
This is partly blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1997050, but setting without_kube_proxy: true gets Cilium running.
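For context, a kube-proxy-free install means Cilium itself handles service load-balancing, so it has to be pointed directly at the API server endpoint. In Helm-values terms this corresponds to something like the following (illustrative values; the exact knobs the without_kube_proxy Terraform variable sets may differ):

```yaml
kubeProxyReplacement: strict        # Cilium replaces kube-proxy entirely
k8sServiceHost: api-int.example-cluster.example.com   # placeholder API endpoint
k8sServicePort: 6443
```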
The CRD version was addressed in https://github.com/cilium/cilium-olm/commit/47ff4dfd32ef7c17fe95ed6c4feff8a6bb937218, and the certification system should now accept v1.
There was also a minor issue with ports: port 8080 was seemingly taken by another process, fixed in https://github.com/cilium/cilium-olm/commit/d8c642682fb33a83599a2b3e477924b4cb1d7759.
This has just been merged.
I think this can be closed once it has been validated against a 4.9 build that includes the CNO fixes.
@vrutkovs https://github.com/openshift/cluster-network-operator/pull/1188 was merged on 01/09/21, and 4.9.0-fc.1 appears to have been released that day as well. How can I check which version of CNO was shipped in 4.9.0-fc.0 (or in the latest nightly build) without deploying a cluster?
> How can I check which version of CNO was shipped in 4.9.0-fc.0 (or a latest nightly build) without deploying a cluster?
```
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.0-rc.0-x86_64 --commit-urls | grep cluster-network-operator
  cluster-network-operator       https://github.com/openshift/cluster-network-operator/commit/8437b077d5700f3b4f484f34717c939faf90c5e2
```
Yup, it's in rc.0 (and the latest nightlies too).
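The same extraction can be scripted; here is a small sketch that pulls the commit hash out of a release-info line (using the line captured above as sample input, since `oc adm release info` needs pull access to the release image):

```shell
# Sample line as produced by:
#   oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.0-rc.0-x86_64 --commit-urls
release_info='cluster-network-operator https://github.com/openshift/cluster-network-operator/commit/8437b077d5700f3b4f484f34717c939faf90c5e2'

# Keep the cluster-network-operator line and strip everything up to the commit hash.
cno_commit=$(printf '%s\n' "$release_info" \
  | awk '/cluster-network-operator/ {print $2}' \
  | sed 's|.*/commit/||')
echo "$cno_commit"
```

From there, a local clone of cluster-network-operator plus `git merge-base --is-ancestor` can confirm whether a given fix commit is an ancestor of the shipped one.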
@vrutkovs ah, I didn't realise rc.0 was out already, thanks very much!
I've started deploying 4.9.0-rc.0 just now, and I can see that all the issues discussed here have been fixed: the control plane nodes and Cilium are all running happily.
```
$ kubectl --kubeconfig tfc-scripts/test-490rc0-oss1103-1.kubeconfig get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-50-199.eu-west-1.compute.internal   Ready    master   60s   v1.22.0-rc.0+75ee307
ip-10-0-61-143.eu-west-1.compute.internal   Ready    master   88s   v1.22.0-rc.0+75ee307
ip-10-0-61-169.eu-west-1.compute.internal   Ready    master   88s   v1.22.0-rc.0+75ee307
$ kubectl --kubeconfig tfc-scripts/test-490rc0-oss1103-1.kubeconfig get pods -n cilium
NAME                               READY   STATUS    RESTARTS   AGE
cilium-88xxl                       1/1     Running   0          56s
cilium-j7k2v                       1/1     Running   0          56s
cilium-olm-66b748845-hslfc         1/1     Running   0          4m22s
cilium-operator-588fb5ff4b-2lt5t   1/1     Running   0          56s
cilium-operator-588fb5ff4b-46qm5   1/1     Running   0          56s
cilium-xpg5f                       1/1     Running   0          56s
$ kubectl --kubeconfig tfc-scripts/test-490rc0-oss1103-1.kubeconfig get pods -n openshift-network-operator
NAME                               READY   STATUS    RESTARTS   AGE
network-operator-f889465bb-zs7wl   1/1     Running   0          5m53s
$
```
If something else happens and bootstrap doesn't complete in my test, I'll open another issue to dig into that.
It definitely worked:
```
$ kubectl --kubeconfig tfc-scripts/test-490rc0-oss1103-1.kubeconfig get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-rc.0   True        False         86m     Cluster version is 4.9.0-rc.0
$ kubectl --kubeconfig tfc-scripts/test-490rc0-oss1103-1.kubeconfig get pods -n cilium
NAME                               READY   STATUS    RESTARTS       AGE
cilium-675hm                       1/1     Running   0              106m
cilium-88xxl                       1/1     Running   0              114m
cilium-bk6ls                       1/1     Running   0              103m
cilium-j7k2v                       1/1     Running   0              114m
cilium-lbl6h                       1/1     Running   0              106m
cilium-olm-7dff7dd48d-tz47h        1/1     Running   2 (105m ago)   112m
cilium-operator-588fb5ff4b-2lt5t   1/1     Running   1 (111m ago)   114m
cilium-operator-588fb5ff4b-46qm5   1/1     Running   1 (105m ago)   114m
cilium-xpg5f                       1/1     Running   0              114m
$
```
(The cilium-operator and cilium-olm restarts are a known issue during the early bootstrap stage.)
It appears that Cilium is not working on 4.9 development builds (see https://github.com/openshift/release/pull/18681#issuecomment-903672655).
There are 4.9.0-fc.0 binaries available already, and it should be easy to test this as well with https://github.com/cilium/openshift-terraform-upi/commit/c87d64c7b011480cd3940a1b00743f151b38d111.