pando85 opened 2 months ago
Yeah, this stuff is a bit awkward / not finalised at the moment: https://github.com/kube-rs/kube/issues/1472
The shared interface, in its main loop, does basically two things:

- `apply_watcher_event` :: same as for regular reflectors
- `dispatch_event` :: to subscribers, only for shared reflectors

And the dispatcher there is currently set up to not pass up the delete events. Sometimes this is right (watches/owns relations typically don't care about deletes), sometimes it is not (the root stream cares about deletes because it needs to finalize).
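As a toy model of that loop (plain std Rust with invented names, not kube's actual types): every event updates the store, but only non-delete events reach subscribers.

```rust
use std::sync::mpsc;

// Simplified stand-in for kube's watcher event type (the real enum
// has more variants).
#[derive(Clone, Debug, PartialEq)]
enum Event {
    Apply(String),
    Delete(String),
}

/// Toy shared-reflector loop: every event updates the store
/// (apply_watcher_event), but deletes are not dispatched to
/// subscribers (dispatch_event).
fn run_shared_loop(
    events: Vec<Event>,
    store: &mut Vec<String>,
    subscriber: &mpsc::Sender<Event>,
) {
    for ev in events {
        // apply_watcher_event: same as for regular reflectors
        match &ev {
            Event::Apply(name) => {
                if !store.contains(name) {
                    store.push(name.clone());
                }
            }
            Event::Delete(name) => store.retain(|n| n != name),
        }
        // dispatch_event: delete events are currently not passed up
        if !matches!(ev, Event::Delete(_)) {
            let _ = subscriber.send(ev);
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut store = Vec::new();
    run_shared_loop(
        vec![
            Event::Apply("deploy-a".into()),
            Event::Delete("deploy-a".into()),
        ],
        &mut store,
        &tx,
    );
    drop(tx);
    let seen: Vec<Event> = rx.iter().collect();
    // The store saw the delete, but the subscriber never did.
    assert!(store.is_empty());
    assert_eq!(seen, vec![Event::Apply("deploy-a".into())]);
    println!("subscriber saw {} event(s)", seen.len());
}
```

The asymmetry in the last two assertions is exactly the reported problem: the root store observes the deletion, but a subscriber driving a controller never gets the chance to react to it.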
For regular (non-shared) reflectors, the choice of whether to receive `Delete` events actually happens later in the filter pipeline, by using one of `applied_objects` or `touched_objects`, either implicitly or explicitly, in the streams API.
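Schematically, that choice looks like this (a std-only toy, not the real `WatchStreamExt` combinators): the applied-objects flattening drops deletes, while the touched-objects flattening keeps them.

```rust
#[derive(Clone, Debug, PartialEq)]
enum Event {
    Apply(String),
    Delete(String),
}

/// Toy analogue of `applied_objects`: flattens to the objects that
/// were applied, silently dropping deletes.
fn applied_objects(events: &[Event]) -> Vec<String> {
    events
        .iter()
        .filter_map(|e| match e {
            Event::Apply(name) => Some(name.clone()),
            Event::Delete(_) => None,
        })
        .collect()
}

/// Toy analogue of `touched_objects`: flattens to every object the
/// watch touched, deletes included.
fn touched_objects(events: &[Event]) -> Vec<String> {
    events
        .iter()
        .map(|e| match e {
            Event::Apply(name) | Event::Delete(name) => name.clone(),
        })
        .collect()
}

fn main() {
    let events = vec![
        Event::Apply("a".into()),
        Event::Delete("a".into()),
        Event::Apply("b".into()),
    ];
    assert_eq!(applied_objects(&events), vec!["a", "b"]);
    assert_eq!(touched_objects(&events), vec!["a", "a", "b"]);
    println!("applied: {:?}", applied_objects(&events));
}
```

A non-shared consumer picks one of the two flatteners and thereby opts in or out of deletes; the shared subscriber, as noted next, never gets to make that choice.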
The subscriber cannot use those flattener interfaces because the stream `Item` in the shared stream interfaces requires an `Arc<Child>` (see e.g. `owns_shared_stream`).
I don't have a great answer for how to suitably fix this today. We discussed this a while ago with @mateiidavid, and we were considering reworking the `EventFlatten` interface (partly started in https://github.com/kube-rs/kube/issues/1517). We might need to go further here and allow the two different stream sources to be "flattened/filtered" (the terminology is wrong now; the PR is fixing that).
Thanks for your very thorough answer. Now I better understand the complexity and the caveats of the current crate.

I can offer to help with small things that have already been decided. I will keep an eye on this.

I love this project and I think it is great to add these missing features to push Rust operators forward. Thank you, @clux, for all your work.
Just to echo @clux, it would be great to fix this. I think when we first implemented this, we made it so that a shared reflector receives a reference to an object and then retrieves it from the store. IIRC this did not work for deletes, since you can't retrieve a deleted object from the store; the store operation happens first.
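The ordering problem can be illustrated with a std-only toy (a `HashMap` standing in for the reflector store): once the delete has been applied to the store, a subscriber handed only a reference has nothing left to look up.

```rust
use std::collections::HashMap;

// Toy stand-in for the reflector store: object name -> payload.
fn main() {
    let mut store: HashMap<String, String> = HashMap::new();

    // Apply: the object lands in the store, so a subscriber could
    // resolve a reference to it.
    store.insert("deploy-a".into(), "spec v1".into());
    assert!(store.get("deploy-a").is_some());

    // Delete: the store operation happens first, so by the time a
    // subscriber is handed a reference to the object, the lookup
    // already fails.
    store.remove("deploy-a");
    assert!(store.get("deploy-a").is_none());

    println!("post-delete lookup: {:?}", store.get("deploy-a"));
}
```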
I'd have to page back into this, but we could perhaps:
...in addition to everything suggested above. I think the pagination work changed a few of our assumptions and possibly made it easier for us to come up with a more robust API.
Current and expected behavior
I have a dummy controller that creates a deployment based on a CRD. I'm watching the deployments this controller owns and triggering reconcile events on changes there.
Current behavior doesn't dispatch delete events: current code
I expect to apply changes if my CRD defines a deployment that should be running and that deployment disappears.
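The expected behavior, as a std-only toy sketch (names invented for illustration): reconcile compares the CRD's desired state against what exists and re-creates a deployment that has disappeared, which is why a delete event on the owned deployment needs to re-trigger it.

```rust
use std::collections::HashSet;

/// Toy reconcile: if the CRD says a deployment should exist and it
/// doesn't, re-create it. Returns true when it had to act.
fn reconcile(desired: &str, cluster: &mut HashSet<String>) -> bool {
    if cluster.contains(desired) {
        false // steady state, nothing to do
    } else {
        cluster.insert(desired.to_string());
        true // re-created the missing deployment
    }
}

fn main() {
    let mut cluster = HashSet::new();
    assert!(reconcile("deploy-a", &mut cluster)); // initial create
    assert!(!reconcile("deploy-a", &mut cluster)); // steady state

    // The deployment disappears. Without a dispatched Delete event,
    // the controller never runs reconcile again and the deployment
    // stays missing; with one, it is restored:
    cluster.remove("deploy-a");
    assert!(reconcile("deploy-a", &mut cluster));
    assert!(cluster.contains("deploy-a"));
}
```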
Possible solution
As a workaround, on `Event::Delete(_)` I send a bulk reconcile trigger to the controller through an `mpsc::channel`. This is an ugly workaround because of the architecture and because it forces all objects in the controller to reconcile.

Additional context
No response
Environment
Configuration and features
Affected crates
kube-runtime
Would you like to work on fixing this bug?
yes