This issue has been inactive for 60 days. If the issue is still relevant please comment to re-activate the issue. If no action is taken within 7 days, the issue will be marked closed.
Still needs looking into. Looking at the code, I didn't see anything that could cause the described behaviour, but I'd like to build a proper test and validate this first-hand.
Oops, sorry for not updating this issue. I did dig into it a bit, and it seems to be a Kubernetes issue rather than a Wave one. I was able to replicate it by hand. From looking around, this does appear to be expected K8s behaviour, a bit weird though...
If you could provide some minimal Kubernetes YAML or a Helm chart that could be used to reproduce this in kind or minikube, that would save me some time.
Will do
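For anyone following along, a throwaway cluster is enough to reproduce this; for example with kind (the cluster name is arbitrary):

```sh
# Create a disposable cluster; kind points the current kubectl context at it
kind create cluster --name wave-gc-test
```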
Owner Configmap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "2021-01-23T16:29:35Z"
  name: ownerconfig
  namespace: default
  resourceVersion: "1018"
  selfLink: /api/v1/namespaces/default/configmaps/ownerconfig
  uid: 5cef8d75-be36-4ff9-bd3f-6fee9f0b0187
```
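For reference, this can be created and its generated uid read back like so (a sketch; the uid will differ per cluster):

```sh
# Create the empty owner ConfigMap and read back the uid the API server assigned
kubectl create configmap ownerconfig
kubectl get configmap ownerconfig -o jsonpath='{.metadata.uid}'
```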
App Configmap (with the owner reference manually set to point at ownerconfig):
```yaml
apiVersion: v1
data:
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2021-01-23T16:30:38Z"
  name: appconfig
  namespace: default
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: ConfigMap
    name: ownerconfig
    uid: 5cef8d75-be36-4ff9-bd3f-6fee9f0b0187
  resourceVersion: "1373"
  selfLink: /api/v1/namespaces/default/configmaps/appconfig
  uid: 44b71cef-26b7-48ea-adfe-f41acab2675c
```
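One way to wire that owner reference up by hand is to substitute the uid read back above into an applied manifest (a sketch; the OWNER_UID variable is just illustrative):

```sh
# Fetch the owner's uid, then apply appconfig with the ownerReference filled in
OWNER_UID=$(kubectl get configmap ownerconfig -o jsonpath='{.metadata.uid}')
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: appconfig
  namespace: default
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: ConfigMap
    name: ownerconfig
    uid: ${OWNER_UID}
data:
  foo: bar
EOF
```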
Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    wave.pusher.com/update-on-config-change: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        env:
        - name: FOO
          valueFrom:
            configMapKeyRef:
              name: appconfig
              key: foo
```
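With Wave running in the cluster, applying this manifest is all it takes to trigger the ownership wiring (assuming it is saved as deployment.yaml):

```sh
# Apply the Deployment, then read appconfig back to see what Wave added
kubectl apply -f deployment.yaml
kubectl get configmap appconfig -o yaml
```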
Once the deployment is created, the appconfig configmap looks like this, as expected:
```yaml
apiVersion: v1
data:
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2021-01-23T16:30:38Z"
  name: appconfig
  namespace: default
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: ConfigMap
    name: ownerconfig
    uid: 5cef8d75-be36-4ff9-bd3f-6fee9f0b0187
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: false
    kind: Deployment
    name: nginx-deployment
    uid: 08cf1936-230d-43bb-ae30-93a002740f6a
  resourceVersion: "2256"
  selfLink: /api/v1/namespaces/default/configmaps/appconfig
  uid: 44b71cef-26b7-48ea-adfe-f41acab2675c
```
Now delete the ownerconfig configmap. Instead of the appconfig configmap being deleted, Kubernetes just removes its owner reference to ownerconfig.
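For reference, the step and the check are just (assuming the default namespace):

```sh
# Delete the owning ConfigMap, then re-read appconfig
kubectl delete configmap ownerconfig
kubectl get configmap appconfig -o yaml
```

appconfig is then left looking like this: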
```yaml
apiVersion: v1
data:
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2021-01-23T16:30:38Z"
  name: appconfig
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: false
    kind: Deployment
    name: nginx-deployment
    uid: 95f7a01c-6511-4a5c-ba42-29a3509a263d
  resourceVersion: "3895"
  selfLink: /api/v1/namespaces/default/configmaps/appconfig
  uid: 44b71cef-26b7-48ea-adfe-f41acab2675c
```
This is why I think it's a K8s issue. I would expect that when the ownerconfig configmap is deleted, the appconfig configmap would be deleted as well, instead of just having the owner reference removed. I don't think it's something Wave can prevent.
Then, when the deployment is deleted, the appconfig configmap is left without any owner references and is never cleaned up.
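The leftover state is easy to confirm (a sketch; per the behaviour described above, the jsonpath prints nothing because no owner references remain):

```sh
# Delete the Deployment, then check what owners appconfig has left
kubectl delete deployment nginx-deployment
kubectl get configmap appconfig -o jsonpath='{.metadata.ownerReferences}'
```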
That looks like expected behaviour to me.
Kubernetes garbage collection only removes an object entirely once the owner named by its final ownerReference is removed. As long as there is at least one valid owner, the object is considered "in use" and thus remains.
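That rule can be seen with plain ConfigMaps, independent of Wave (a minimal sketch; all names are arbitrary):

```sh
# One dependent carrying ownerReferences to two owners
kubectl create configmap owner-a
kubectl create configmap owner-b
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: dependent
  ownerReferences:
  - apiVersion: v1
    kind: ConfigMap
    name: owner-a
    uid: $(kubectl get configmap owner-a -o jsonpath='{.metadata.uid}')
  - apiVersion: v1
    kind: ConfigMap
    name: owner-b
    uid: $(kubectl get configmap owner-b -o jsonpath='{.metadata.uid}')
EOF
kubectl delete configmap owner-a  # dependent survives; the dangling reference is dropped
kubectl delete configmap owner-b  # last owner gone; GC now deletes dependent
kubectl get configmap dependent   # eventually NotFound
```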
I'm going to close this issue for now. If there are any further questions or concerns, please feel free to re-open.
If I have a StatefulSet that references a configmap or secret with existing owner references, Wave adds its own owner reference as expected; however, when the StatefulSet is deleted, Wave removes all owner references.
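A minimal sketch of that shape, mirroring the Deployment above (the StatefulSet name and serviceName are illustrative, and appconfig is assumed to already carry an owner reference as in the earlier repro):

```sh
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  annotations:
    wave.pusher.com/update-on-config-change: "true"
spec:
  serviceName: nginx
  replicas: 1
  selector:
    matchLabels:
      app: nginx-sts
  template:
    metadata:
      labels:
        app: nginx-sts
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        env:
        - name: FOO
          valueFrom:
            configMapKeyRef:
              name: appconfig
              key: foo
EOF
# After `kubectl delete statefulset nginx-statefulset`, the report is that Wave
# strips all ownerReferences from appconfig, not just the one it added itself.
```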