canonical / prometheus-scrape-config-k8s-operator

https://charmhub.io/prometheus-scrape-config-k8s
Apache License 2.0

Improve reliability when operator units are churned #4

Closed mmanciop closed 2 years ago

mmanciop commented 2 years ago

When a prometheus-scrape-config unit is churned, it does not get relation_joined events, but only relation_created. As such, we must also listen to that for upstream relations to ensure we correctly propagate scrape jobs downstream.

To reproduce, delete the prometheus-scrape-config pod with the app scaled to one, and see what happens :D
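For illustration, here is a minimal sketch of what observing both event kinds could look like in the charm's constructor. This is not the charm's actual source: the class name, the "metrics-endpoint" relation name, and the handler body are assumptions; only self._prometheus_relation_name and _set_jobs_to_new_downstream appear in the diff quoted further down.

    from ops.charm import CharmBase
    from ops.main import main


    class PrometheusScrapeConfigCharm(CharmBase):
        def __init__(self, *args):
            super().__init__(*args)
            # Downstream relation to Prometheus; the name is illustrative.
            self._prometheus_relation_name = "metrics-endpoint"

            # A churned unit sees relation_created but not relation_joined for
            # relations that already exist, so observe both event kinds and
            # funnel them into the same handler.
            for event in (
                self.on[self._prometheus_relation_name].relation_created,
                self.on[self._prometheus_relation_name].relation_joined,
            ):
                self.framework.observe(event, self._set_jobs_to_new_downstream)

        def _set_jobs_to_new_downstream(self, event):
            # Propagate the upstream scrape jobs into the downstream relation
            # data; the real logic is elided here.
            ...


    if __name__ == "__main__":
        main(PrometheusScrapeConfigCharm)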

mmanciop commented 2 years ago

You mean like this PR description? (Granted, it is not a commit message.)

When a prometheus-scrape-config unit is churned, it does not get relation_joined events, but only relation_created. As such, we must also listen to that for upstream relations to ensure we correctly propagate scrape jobs downstream.

On Wed, Dec 8, 2021 at 2:41 PM Balbir Thomas @.***> wrote:

@.**** commented on this pull request.

In src/charm.py https://github.com/canonical/prometheus-scrape-config-k8s-operator/pull/4#discussion_r764875651 :

    self.framework.observe(
        self.on[self._prometheus_relation_name].relation_created,
        self._set_jobs_to_new_downstream,
    )
    self.framework.observe(

No dispute here, we are aligned. All I was saying is that I would be genuinely interested to understand those different use cases. Encoding the experience that led you to this change in a commit message, or in comments in the code, will help future maintainers of this project.
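One way that experience could be captured directly in the source, as suggested above, is a comment right next to the observe calls. This is only an illustrative sketch of such a comment, not the charm's actual code:

    # NOTE: when a unit of this charm is churned (e.g. its pod is deleted and
    # recreated), the fresh unit receives relation_created but not
    # relation_joined for relations that already exist. Observe both events so
    # scrape jobs are still propagated to the downstream Prometheus relation.
    self.framework.observe(
        self.on[self._prometheus_relation_name].relation_created,
        self._set_jobs_to_new_downstream,
    )
    self.framework.observe(
        self.on[self._prometheus_relation_name].relation_joined,
        self._set_jobs_to_new_downstream,
    )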
