1Password / onepassword-operator

The 1Password Connect Kubernetes Operator provides the ability to integrate Kubernetes Secrets with 1Password. The operator also handles autorestarting deployments when 1Password items are updated.
https://developer.1password.com/docs/connect/
MIT License

Namespace deletion stuck due to 1p finalisers #55

Closed: roderik closed this issue 8 months ago

roderik commented 3 years ago

Your environment

Operator Version: 1password/onepassword-operator:1.0.1

Connect Server Version: 1password/connect-api:1.2.0

Kubernetes Version: v1.20.8-gke.900

What happened?

I had three deployments in a namespace that used secrets from the operator (via annotations). I deleted the namespace, which removed everything I could find in it, but the namespace itself is stuck with: 'Some content in the namespace has finalizers remaining: onepassword.com/finalizer.secret in 3 resource instances'

What did you expect to happen?

The namespace to be deleted

Steps to reproduce

Notes & Logs

I was able to clear it out using the approach from https://stackoverflow.com/questions/52369247/namespace-stuck-as-terminating-how-do-i-remove-it (sketched below).
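For reference, a minimal sketch of that approach, assuming `jq` is available and using a placeholder namespace name. This is the generic "force-finalize" trick from the linked answer, not anything specific to the 1Password operator, and it skips normal cleanup, so use it with care:

```sh
# Placeholder namespace; adjust to the stuck namespace.
NS=my-namespace

# Dump the namespace, strip its finalizers, and push the result to the
# /finalize subresource so the API server completes the deletion.
kubectl get namespace "$NS" -o json \
  | jq 'del(.spec.finalizers)' > ns.json
kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f ns.json
```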

ch9hn commented 3 years ago

Got the same issue; you can solve it the following way:

  1. kubectl edit <onepassword-item> and remove the finalizer section.
  2. kubectl edit <namespace> and remove the finalizer section.

The issue should be gone.
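A non-interactive version of the same steps might look like the sketch below. The resource kind assumes the operator's OnePasswordItem CRD and the names are placeholders; depending on your setup the finalizer may instead sit on the generated Secrets, in which case patch those:

```sh
# Merge-patching finalizers to null clears metadata.finalizers.
kubectl -n <namespace> patch onepassworditem <item-name> --type=merge \
  -p '{"metadata":{"finalizers":null}}'

# Then clear any remaining finalizer section on the namespace itself.
kubectl patch namespace <namespace> --type=merge \
  -p '{"metadata":{"finalizers":null}}'
```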

jillianwilson commented 3 years ago

Thanks @chfxr for the workaround. It looks like some work needs to be done to ensure proper cleanup of resources so no one else finds themselves in this situation.

volodymyrZotov commented 8 months ago

I looked into this and unfortunately was not able to get the finalizers removed from the resources when the namespace is deleted.

The reason it gets stuck is that the operator is removed first; the remaining resources are then left with the 1Password finalizer and nothing is around to remove it.

There are several ways to mitigate it:

  1. Remove all the resources first before deleting the namespace.
  2. Remove finalizers manually by running kubectl edit <resource> (a scripted version is sketched below).
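
A rough sketch of option 2 in scripted form, assuming the operator's OnePasswordItem CRD and a placeholder namespace name (if the finalizer is on the generated Secrets in your version, patch those instead):

```sh
NS=my-namespace

# Clear the 1Password finalizer from every OnePasswordItem in the namespace,
# then delete the namespace.
for item in $(kubectl -n "$NS" get onepassworditems -o name); do
  kubectl -n "$NS" patch "$item" --type=merge -p '{"metadata":{"finalizers":null}}'
done
kubectl delete namespace "$NS"
```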