What version of eunomia are you using?
$ kubectl get -n eunomia-operator endpoints/eunomia-operator -o jsonpath='{.subsets[*].addresses[*].targetRef.name}' | xargs -I% kubectl exec -n eunomia-operator % -- curl -sS localhost:8383/metrics | grep eunomia_build_info
# HELP eunomia_build_info A metric with a constant '1' value labeled by version from which eunomia was built, and other useful build information.
# TYPE eunomia_build_info gauge
eunomia_build_info{branch="verify-277",builddate="20200224-17:11:26",gitsha1="2b3281ad61daca36834f8705d48498411b6d3bdd",goversion="go1.13.7",operatorsdk="v0.8.1",version="v0.1.4-dev"} 1
eunomia version: 0.1.4-dev
Does this issue reproduce with the latest release?
yes
What operating system and processor architecture are you using (kubectl version)?
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth
Server https://192.168.64.4:8443
kubernetes v1.11.0+d4cacc0
What did you do?
$ (eval "$(minishift oc-env)"; oc login -u system:admin; oc new-project eunomia-hello-world-demo || oc project eunomia-hello-world-demo; sed 's/eunomia-runner/eunomia-operator/g' < ./examples/hello-world-helm/service_account_runner.yaml | oc apply -f -)
Logged into "https://192.168.64.4:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
default
* eunomia-hello-world-demo
eunomia-hello-world-yaml-demo
eunomia-operator
kube-dns
kube-proxy
kube-public
kube-system
ocp-conflict-test-ns
openshift
openshift-apiserver
openshift-controller-manager
openshift-core-operators
openshift-infra
openshift-node
openshift-service-cert-signer
openshift-web-console
Using project "eunomia-hello-world-demo".
Error from server (AlreadyExists): project.project.openshift.io "eunomia-hello-world-demo" already exists
Already on project "eunomia-hello-world-demo" on server "https://192.168.64.4:8443".
serviceaccount/eunomia-operator created
clusterrolebinding.rbac.authorization.k8s.io/eunomia-demo-runner-namespace-admin configured
clusterrolebinding.rbac.authorization.k8s.io/eunomia-demo-runner-cluster-admin configured
$ (eval "$(minishift oc-env)"; oc apply -f ./test/e2e/testdata/simple/ocp-hello-cr1.yaml )
gitopsconfig.eunomia.kohls.io/hello-world-ocp created
$ k get all -n eunomia-hello-world-demo
NAME                                            READY   STATUS      RESTARTS   AGE
pod/gitopsconfig-hello-world-ocp-eprkqr-qzn25   0/1     Completed   0          2m
pod/helloworld                                  1/1     Running     0          1m

NAME                                          DESIRED   SUCCESSFUL   AGE
job.batch/gitopsconfig-hello-world-ocp-eprkqr   1         1            2m
What did you expect to see?
I expected eunomia labels to be applied to the resource, so that eunomia could later delete it automatically, something like below:
$ k get pod/helloworld -n eunomia-hello-world-demo --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
helloworld   1/1     Running   0          1m    app=helloworld,gitopsconfig.eunomia.kohls.io/owned=...,gitopsconfig.eunomia.kohls.io/applied=...
What did you see instead?
The resources generated by oc process from OpenShift Template resources do not get eunomia labels applied, so eunomia won't delete them automatically when they disappear from git, or when the GitOpsConfig CR is deleted:
$ k get pod/helloworld -n eunomia-hello-world-demo --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
helloworld   1/1     Running   0          1m    app=helloworld
Two possible approaches:
a) add the -l gitopsconfig.eunomia.kohls.io/owned=...,gitopsconfig.eunomia.kohls.io/applied=... labels in the ocp-template processor;
this would require moving the generation of the label values from resourceManager.sh to an earlier phase (discoverEnvironment.sh?) so that they are available in both processTemplates.sh and resourceManager.sh
b) or, completely redesign how automatic deletion of resources is handled so as to avoid this problem (for example, retrieve the UIDs of all resources applied by eunomia and store them in a ConfigMap, to later know what to delete?)
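Option (a) could look roughly like the sketch below. This is only an illustration, not eunomia's actual code: the helper name eunomia_label_args, the $CR_NAME/$CR_NAMESPACE variables, and the owned/applied label value format are all assumptions, since the real values are currently computed inside resourceManager.sh.

```shell
# Hypothetical helper, extracted to an early phase (discoverEnvironment.sh?)
# so both processTemplates.sh and resourceManager.sh can reuse the same
# label values. $CR_NAME and $CR_NAMESPACE are illustrative stand-ins for
# however eunomia identifies the owning GitOpsConfig CR, and the value
# format below is made up for the sketch.
eunomia_label_args() {
  printf '%s %s' \
    "gitopsconfig.eunomia.kohls.io/owned=${CR_NAMESPACE}_${CR_NAME}" \
    "gitopsconfig.eunomia.kohls.io/applied=${CR_NAMESPACE}_${CR_NAME}"
}

# The ocp-template processor could then relabel the rendered resources
# locally before handing them over to be applied:
#   oc process --local -f template.yaml -o yaml \
#     | kubectl label --local -f - -o yaml $(eunomia_label_args) \
#     | kubectl apply -f -
```

Using `kubectl label --local` keeps the relabeling a pure client-side transform, so the processor stays a simple pipeline stage.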