lwolf / kube-cleanup-operator

Kubernetes Operator to automatically delete completed Jobs and their Pods
MIT License

Cleanup Operator tries to remove pod twice #31

Closed gkrizek closed 5 years ago

gkrizek commented 5 years ago

I'm following your example in the README. I can get the cleanup-operator running just fine, but I'm seeing a weird problem where it seems to be trying to remove the job and the pod twice.

After the cleanup-operator was running, I simply ran:

kubectl create -f https://k8s.io/examples/controllers/job.yaml

After it completes, I see this in the log:

2019/08/30 15:08:18 Controller started...
2019/08/30 15:08:18 Listening for changes...

2019/08/30 15:40:44 Deleting pod 'pi-xrm7p'
2019/08/30 15:40:44 Deleting job 'pi'
2019/08/30 15:40:44 Deleting pod 'pi-xrm7p'
2019/08/30 15:40:44 failed to delete job pi: pods "pi-xrm7p" not found
2019/08/30 15:40:44 Deleting job 'pi'

I can confirm there is only 1 job and 1 pod, so I have no idea why it would be trying twice like that.

I'm running on AWS EKS with Kube 1.12. Thanks!

lwolf commented 5 years ago

Hi, I saw that behaviour as well. I assume that it deletes the job (which deletes the pod) and then also tries to delete the pod directly.
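
Roughly, the job-side delete would look like the sketch below. This is only an illustration, not the operator's actual code, and it assumes a recent client-go where Delete takes a context; deleteJobWithPods is a made-up name. The point is that a propagation policy lets the garbage collector remove the job's pods, so a separate explicit pod delete afterwards can hit NotFound.

// Illustrative sketch: delete a Job and let the garbage collector remove its Pods.
package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteJobWithPods(clientset *kubernetes.Clientset, namespace, name string) error {
	// Background propagation asks the garbage collector to delete the Job's Pods too.
	propagation := metav1.DeletePropagationBackground
	return clientset.BatchV1().Jobs(namespace).Delete(
		context.TODO(),
		name,
		metav1.DeleteOptions{PropagationPolicy: &propagation},
	)
}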

gkrizek commented 5 years ago

Ok, I've been having issues where deleting jobs doesn't always delete their pods. But I think that's an issue with my Kubernetes version. I'll dig into this more when I have time to confirm what's going on.
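
One quick check (using the pod name from my example above) is whether the pod actually lists the job in its ownerReferences; if it doesn't, the garbage collector won't remove it when the job is deleted:

kubectl get pod pi-xrm7p -o jsonpath='{.metadata.ownerReferences[*].kind}'
# a pod created by a Job should print: Job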

lwolf commented 5 years ago

There should be no more messages in the logs about failed to delete **. But the duplicate lines are expected behaviour, since the operator runs its reconcile function on each event, and the API server sends an event on each change of state.
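
To make that concrete, here is a minimal sketch of an idempotent cleanup step (not the operator's actual code; it assumes a recent client-go, and cleanupPod is a made-up name). Because NotFound is treated as success, a duplicate event for the same pod just repeats the "Deleting pod" log line instead of producing an error.

// Illustrative sketch: a reconcile step that tolerates duplicate events.
package cleanup

import (
	"context"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func cleanupPod(clientset *kubernetes.Clientset, namespace, name string) error {
	log.Printf("Deleting pod '%s'", name)
	err := clientset.CoreV1().Pods(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
	if apierrors.IsNotFound(err) {
		// The pod is already gone (e.g. removed when the Job was deleted); nothing to do.
		return nil
	}
	return err
}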