observatorium / thanos-receive-controller

Kubernetes controller to automatically configure Thanos receive hashrings
Apache License 2.0

Cleanup generate configmap #62

Closed clyang82 closed 3 years ago

clyang82 commented 3 years ago

fixed: https://github.com/observatorium/operator/issues/10

squat commented 3 years ago

Overall this seems reasonable; however, I think there are some tricky edge cases. What if the receive controller is not running as a Deployment but as a single Pod?

squat commented 3 years ago

Btw, do we want to tackle the deletion of config maps for statefulsets that have been removed (e.g. kubectl delete sts hashring1) in a separate pr?

clyang82 commented 3 years ago

> Btw, do we want to tackle the deletion of config maps for statefulsets that have been removed (e.g. kubectl delete sts hashring1) in a separate pr?

Do you think we need to tackle 2 cases:

  1. delete the generated configmap if the controller has been removed.
  2. delete the generated configmap if the statefulset has been removed.

The reason we need case 1 is that we do not know the order of deletion for the controller and the statefulset. What do you think about using only case 1 for cleanup?

clyang82 commented 3 years ago

> Overall this seems reasonable; however, I think there are some tricky edge cases. What if the receive controller is not running as a Deployment but as a single Pod?

Right now, our README only mentions deploying the controller as a Deployment. Can we tackle that scenario when we actually have such a use case?

squat commented 3 years ago

Yes, these are two different cases looking at two totally orthogonal concepts. One is concerned with the order of deletion for dependent resources and the other with watching resources correctly.

We should definitely address number 2, even if in a different PR, to ensure that users/cluster admins are not confused and so that we don't pollute the API.

Concerning 1: I'd like input from the rest of the @observatorium/maintainers team. WDYT about this controller's assumption that it's running as a Deployment? Does this make sense? I think we should model this the same way other projects do, e.g. Prometheus Operator, since this is a solved problem. The approach that PO takes is to set the owner reference to the resource that triggered the creation of the new resource. In the case of a Prometheus statefulset, that would be the Prometheus custom resource, not the Prometheus Operator. I think this makes sense, though it is opposed to what I mentioned before, that this controller itself should be the owner. I'm sorry for the back and forth @clyang82. I think we just need to decide: is the owner the statefulset or is it the base configmap?

clyang82 commented 3 years ago

Thanks @squat for your thorough consideration. I agree with your idea that whoever triggers the creation of the new resource is the owner of that resource. Most operators follow this approach: the custom resource is the owner of the operand.

But this case is a little more complex, since we only generate one configmap for all tenants. In other words, if we define 2 tenants, there are 2 receive statefulsets. Which one should be the owner of the generated configmap?

Can we consider this use case: assume that the thanos-receive-controller only runs inside Observatorium (correct me if we can run it without Observatorium), so we can use the parent custom resource (the Observatorium CR) as the owner of this generated configmap. That is what my previous PR did. WDYT?

squat commented 3 years ago

decided offline: let's use the base configmap as the resource owner of the generated configmaps :)
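
For illustration, the decision above would mean each generated configmap carries an `ownerReferences` entry pointing at the base configmap the controller watches. A hypothetical sketch (all names and the UID are illustrative, not taken from this PR):

```yaml
# Hypothetical generated ConfigMap; names and UID are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: observatorium-tenants-generated   # configmap produced by the controller
  namespace: observatorium
  ownerReferences:
    - apiVersion: v1
      kind: ConfigMap
      name: observatorium-tenants          # base configmap the controller watches
      uid: 6c6a3f2e-0000-0000-0000-000000000000  # UID of the base configmap
data:
  hashrings.json: |
    [{"hashring": "hashring1", "endpoints": ["thanos-receive-0.thanos-receive:10901"], "tenants": ["tenant-a"]}]
```

With this in place, deleting the base configmap lets Kubernetes garbage collection remove the generated configmap automatically, which addresses the cleanup concern without the controller tracking deletion order itself.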

kakkoyun commented 3 years ago

Let's merge and fix accessing `.Kind` and `.APIVersion` directly in a follow-up.

Do we need to create an issue for it? @squat @clyang82

squat commented 3 years ago

:+1: yes I'll do this rn
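
For context on the `.Kind`/`.APIVersion` follow-up mentioned above: objects decoded from the API server in client-go commonly arrive with an empty `TypeMeta`, so reading those fields off a fetched object is fragile when building an owner reference. The sketch below uses minimal stand-in structs that only mirror the shape of Kubernetes' `metav1` types; it is illustrative, not the real client-go API or this repo's code.

```go
// Sketch of why reading .Kind/.APIVersion directly is fragile. The structs
// below are minimal stand-ins mirroring the shape of Kubernetes' metav1
// types; they are illustrative, not the real client-go API.
package main

import "fmt"

type TypeMeta struct {
	Kind       string
	APIVersion string
}

type ConfigMap struct {
	TypeMeta
	Name string
}

// ownerRefGVK returns the kind/apiVersion to put in an OwnerReference.
// Because objects decoded from the API server often have an empty
// TypeMeta, falling back to the known values for a core ConfigMap is
// the robust choice.
func ownerRefGVK(cm ConfigMap) (kind, apiVersion string) {
	if cm.Kind == "" || cm.APIVersion == "" {
		return "ConfigMap", "v1"
	}
	return cm.Kind, cm.APIVersion
}

func main() {
	// TypeMeta left empty, as is typical for objects read via client-go.
	cm := ConfigMap{Name: "observatorium-tenants"}
	kind, apiVersion := ownerRefGVK(cm)
	fmt.Println(kind, apiVersion) // prints "ConfigMap v1"
}
```

In real controller code the equivalent fix is to hardcode the group/version/kind for the owner rather than copying it from the fetched object.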