sunib closed this issue 1 year ago
Hi @sunib,
Could you please provide the steps you performed? Did you try to remove the secret first, or did you remove both the deployment and the secret at the same time? Thanks in advance
Hi @igor-karpukhin: it's been weeks since I started playing with Atlas. These Kubernetes manifests have definitely been deleted, but I no longer know in which order. The log 'spamming' in the operator has been going on since then; I hoped it would stop with my recent upgrade to 1.5.1. Restarting the pod also does not help.
The ugly part is that I can't just go and delete the whole cluster for something like this. I also can't just delete the whole Atlas deployment, but I'm sure that I cleaned up both. Where could the state be kept? The log lines must originate from somewhere.
Hi @sunib. It looks like you can't delete the Atlas deployment because of the finalizer. What you can do to remove your deployment is to edit your atlasdeployment resource first and remove the finalizers section; then you will be able to remove the resource itself with kubectl delete atlasdeployment <your_deployment_name>.
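Before editing anything, you can confirm that a finalizer is what blocks the deletion. A minimal sketch with placeholder names (substitute your own namespace and deployment name):
kubectl -n <namespace> get atlasdeployment <your_deployment_name> -o jsonpath='{.metadata.finalizers}'
If this prints a non-empty list, the finalizer is what keeps the resource around.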
From the logs, it looks like you removed the namespace with the connection secret. I tried to reproduce it but didn't succeed.
The state is not kept anywhere; every reconcile call is new to the operator. The only reason you see these logs is probably that your atlasdeployment resource is still there in the cluster.
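One quick way to rule that out is to list the resources across all namespaces (a sketch; kubectl prints 'No resources found' when nothing is left):
kubectl get atlasdeployment --all-namespaces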
Thank you for your help @igor-karpukhin: it was indeed still there with a hanging finalizer!
C:\Users\SimonKoudijs>kubectl -n first-mongo-deployment get AtlasDeployment
NAME      AGE
test-db   54d
I managed to remove the namespace while that resource was still there, and it was not showing up in the GUI I'm using (Lens).
The steps to remove it were, in my case:
kubectl -n first-mongo-deployment get AtlasDeployment
kubectl -n first-mongo-deployment patch AtlasDeployment/test-db --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
Then you can delete it without trouble.
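Concretely, the delete and a final sanity check look like this (the delete is the step just described; the names are the ones from this thread):
kubectl -n first-mongo-deployment delete atlasdeployment test-db
kubectl -n first-mongo-deployment get atlasdeployment
The second command should now report that no resources are found.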
Please do note that you cannot run this as-is in Windows cmd (because of the JSON quoting). It then gives the error message error: unable to parse "'[": yaml: found unexpected end of stream
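If you do need to run it from cmd, wrapping the patch in double quotes and escaping the inner quotes should work; this is a sketch I have not verified on Windows:
kubectl -n first-mongo-deployment patch AtlasDeployment/test-db --type json --patch="[{\"op\": \"remove\", \"path\": \"/metadata/finalizers\"}]"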
@sunib I'm glad that I could help. What you can also do instead of patch is to run kubectl -n first-mongo-deployment edit AtlasDeployment/test-db, which will open your default text editor to edit the resource (see https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_edit/).
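For reference, the part to remove in the editor is the finalizers list under metadata, which looks roughly like this (the finalizer value shown here is an assumption; check what your own resource actually contains):
metadata:
  finalizers:
    - mongodbatlas/finalizer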
What did you do to encounter the bug? I played around with the Atlas Operator and created and removed some projects. It's all gone: both my k8s objects and my Atlas objects, but I somehow still get logging that tells me about them. Where is this state kept?
What did you expect? No logging in my operator about things from the past.
What happened instead? I still get logging every minute or so: it spams the operator logs so that I don't see my 'real' problems.
So this 'round' of logs is logged by the operator pod. I already tried killing the pod, and I triple-checked the resources: they are really not there anymore (also not in another namespace).