Thanks @nolar for the detailed sample, which I have successfully adapted and used. I'm now wondering whether this approach can be extended to situations where I don't know ahead of time what kind the child resource will be. This would require a generic `on.event` watcher that watches all kinds of resources, but as far as I can tell this doesn't seem to be supported by Kopf. Am I missing an obvious solution here, or is there some kind of workaround?
@ableuler That answer was written in March 2020. A lot of new features have appeared since then.
I guess, if I were to implement parent-child relations again, I would use in-memory indexing for that, instead of the `.status.subpods` field stored in the resource.
As for "all kinds of resources" — that feature was also added:
```python
@kopf.on.event(kopf.EVERYTHING)
def fn(**_): ...

@kopf.on.event('example.com', kopf.EVERYTHING)  # all resources in a group
def fn(**_): ...

@kopf.on.event(category='all')  # the same as "kubectl get all" (which excludes secrets, among other things)
def fn(**_): ...
```
That also works for on-creation/update/deletion/indexing handlers, timers, and daemons. Though it might not be the best idea to do this on a live cluster without filters (e.g. by labels/annotations/when-callbacks), it will work.
Thank you very much for the pointer to `kopf.EVERYTHING`, and sorry for missing that in the docs. For the moment, this (while filtering by label) solves my problem at hand. However, in-memory indexing looks like a very nice feature that I'll happily take a look at when I get a chance to refactor the custom-object-to-child relation in my code.
I have another question related to a parent-child relation as described above:
I use a decorator of the form `@kopf.on.event('', 'v1', 'pods', labels={'parent-name': 'my-parent'})` to watch events of child resources and update the parent (the actual custom resource object) accordingly. This works like a charm until I stop the Kopf operator for a moment and restart it. In this scenario, events that happened while the operator was down (such as the pod starting) are missed, and the parent never gets updated. Based on the following quote from the docs, I would expect that on restart an initial listing of pod events would still happen and thus trigger the corresponding handler.
> Please note that the event handlers are invoked for every event received from the watching stream. This also includes the first-time listing when the operator starts or restarts. It is the developer's responsibility to make the handlers idempotent (re-executable with no duplicating side-effects).
What am I missing?
ps: in case you're interested what people are building based on your work, this is the project that I am using kopf for: https://github.com/SwissDataScienceCenter/amalthea
> What am I missing?
I managed to answer my own question. In my example I was only reacting to creation and modification events. However, the events that I get during the initial listing on operator restart come without an event type. Properly handling events without a type solved my problem.
Yes, exactly. The event type is `None` for the initial listing (as "listing" is not a "watch-stream" in regular Kubernetes terms, but a Kopf-specific simulation, or pseudo-streaming).
Hi all,
I'm currently working on porting some code from metacontroller into Kopf.
Metacontroller gives you the option to receive callbacks when the monitored object changes or when any of its children is updated.
For example, if I'm watching an object of kind `multijob`, which creates an arbitrary number of standard Kubernetes `job`s, I would receive a callback when the children fail, restart, succeed, etc., which I can use to update the `status` field in the parent.
I haven't been able to find a clean way to do the same thing in Kopf, other than adding separate listeners for both the parent and the children, and updating the parent CRD from within the children listeners. Of course, the example here is simplified; the actual application would have many dependencies and larger hierarchies of objects, and having this kind of inter-dependency between listeners makes them harder to maintain.
Is there any better way to do this? Or is there any feature in Kopf that would make the management of children easier/cleaner?
Thanks!
Related: #58, #264. See also: https://github.com/nolar/kopf/issues/264#issuecomment-562845724
You are right: the only way is, as you said, "adding separate listeners for both the parent/children, and within the children listeners update the parent CRD".
Keep in mind that Kopf keeps one and only one watch-query (an API request) per resource kind, no matter how many handlers there are for that resource kind. So there should be no problems with the APIs.
There is no simpler (i.e. few-liner) solution at the moment.
A better solution is planned, though rather later than sooner (because of priorities, and because my regular employment takes time).
Under the hood, it will work exactly the same way, just with a better DSL for handlers. Some ideation was happening in this gist.
For all those looking for a solution/pattern and coming to this issue — here is an example, which we currently use for ourselves:
1. Label the child resources with the name & namespace of the parent object (assuming they can be in different namespaces; if it is the same namespace by design, only the name is needed).
2. Watch for child resources that have this label (with any value). In the watcher, get the name of the parent resource and patch its status field (e.g. `status.subpods`) with the status of the watched child resource (selected or aggregated).
3. Back in the parent resource, react to changes in that status field, and compute the overall status across all the children.
A sample skeleton code:
Thank you nolar for the detailed update. My current approach looks very similar to the example you provided. Will follow up closely on future releases.