Closed · prabalsharma closed this 3 years ago
The proper `services` entry got created in the ConfigMap, but then somehow it no longer exists afterwards!
ℹ 🚀 gke ☸ prabal-gke 🎂 ./deploy.sh ➤ Configuring deployed GKE cluster…
configmap/cap-values created
{
"apiVersion": "v1",
"data": {
"domain": "prabal-gke.ci.kubecf.charmedquarks.me",
"garden-rootfs-driver": "overlay-xfs",
"platform": "gke",
"public-ip": "10.164.0.28",
"services": "lb"
},
"kind": "ConfigMap",
"metadata": {
"creationTimestamp": "2020-10-05T21:25:21Z",
"name": "cap-values",
"namespace": "kube-system",
"resourceVersion": "2406",
"selfLink": "/api/v1/namespaces/kube-system/configmaps/cap-values",
"uid": "3ea2c7fa-eb91-4f52-92e1-6509e044f30b"
}
}
clusterrolebinding.rbac.authorization.k8s.io/admin created
clusterrolebinding.rbac.authorization.k8s.io/uaaadmin created
clusterrolebinding.rbac.authorization.k8s.io/scfadmin created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cluster-admin configured
clusterrolebinding.rbac.authorization.k8s.io/kube-system:default created
✅ 🚀 gke ☸ prabal-gke 🎂 ./deploy.sh ➤ GKE cluster deployed
`"services": "lb"` can no longer be seen:
[prabal:~/gop/src/github.com/SUSE/catapult] master(+1/-1)* 11m17s ± kubectl get configmap -n kube-system cap-values -o json
{
"apiVersion": "v1",
"data": {
"chart": "/tmp/build/0a06c48c/helm-chart.kubecf-chart/kubecf-2.2.3.tgz",
"domain": "prabal-gke.ci.kubecf.charmedquarks.me",
"garden-rootfs-driver": "overlay-xfs",
"platform": "gke",
"public-ip": "10.164.0.28"
},
"kind": "ConfigMap",
"metadata": {
"creationTimestamp": "2020-10-05T21:25:21Z",
"name": "cap-values",
"namespace": "kube-system",
"resourceVersion": "4736",
"selfLink": "/api/v1/namespaces/kube-system/configmaps/cap-values",
"uid": "3ea2c7fa-eb91-4f52-92e1-6509e044f30b"
}
}
It gets overwritten at this step of the deploy job:
make -s -C modules/kubecf
/tmp/build/0a06c48c/catapult/buildprabal-gke /tmp/build/0a06c48c/catapult/modules/kubecf
[./clean.sh] [backend:gke] [cluster:prabal-gke] Loading
configmap/cap-values patched
[./clean.sh] [backend:gke] [cluster:prabal-gke] Cleaned up KubeCF from the k8s cluster
/tmp/build/0a06c48c/catapult/buildprabal-gke /tmp/build/0a06c48c/catapult/modules/kubecf
[./chart.sh] [backend:gke] [cluster:prabal-gke] Loading
[./chart.sh] [backend:gke] [cluster:prabal-gke] Grabbing chart from local file
'/tmp/build/0a06c48c/helm-chart.kubecf-chart/kubecf-2.2.3.tgz' -> 'chart'
What is going on here?
https://github.com/SUSE/catapult/blob/master/modules/kubecf/clean.sh#L59-L62
We are deleting the `services` data entry from the ConfigMap.
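For context on why the key vanishes silently: when a ConfigMap's `data` map is patched with a `null` value for a key, merge-patch semantics (RFC 7386, which Kubernetes applies to plain string maps) delete that key entirely. A minimal sketch of that behavior, with hypothetical sample data, not the actual catapult code:

```python
# Minimal JSON Merge Patch (RFC 7386) sketch: a null value in the patch
# deletes the key, which is how a ConfigMap data entry like "services"
# can silently disappear after a patch.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null removes the key entirely
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

data = {"domain": "prabal-gke.ci.kubecf.charmedquarks.me", "services": "lb"}
patched = merge_patch(data, {"services": None})
print(patched)  # "services" is gone; "domain" survives
```

This matches the observed symptom above: the second `kubectl get configmap` shows `domain` intact but `services` missing.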
@viccuad do you know?
I think we should not call clean.sh automatically, and instead leave it for explicit use by users.
@prabalsharma I think you hit the nail on the head with https://github.com/SUSE/catapult/blob/master/modules/kubecf/clean.sh#L59-L62.
The bug was introduced by https://github.com/SUSE/catapult/pull/294, when we started using `services` heavily to accommodate the ingress deployments in cap.suse.de.
`services` is cluster environment information, like `domain`, and should not be deleted on kubecf-clean.
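One possible fix direction, sketched below: have clean build its patch so that it nulls only kubecf-specific entries and leaves cluster-environment entries alone. The `CLUSTER_ENV_KEYS` set and function names are my assumptions for illustration, not the actual catapult code; the key names come from the ConfigMap dumps above.

```python
# Hedged sketch: on kubecf-clean, construct a merge-patch body that removes
# only kubecf-specific ConfigMap entries, preserving cluster environment
# entries such as "domain" and "services". CLUSTER_ENV_KEYS is an assumption.
CLUSTER_ENV_KEYS = {"domain", "services", "public-ip", "platform",
                    "garden-rootfs-driver"}

def clean_patch(data):
    """Return a merge-patch body that nulls only non-environment keys."""
    return {key: None for key in data if key not in CLUSTER_ENV_KEYS}

data = {"domain": "example.test", "services": "lb", "chart": "/tmp/kubecf.tgz"}
print(clean_patch(data))  # patch body that would remove only "chart"
```

Applied with merge-patch semantics, such a body would drop `chart` while `services` and `domain` survive the clean step.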
From CI, post-deployment of kubecf: no DNS annotation is present.
https://github.com/SUSE/catapult/blob/master/modules/kubecf/gen_config.sh#L44-L67