robusta-dev / robusta

Kubernetes observability and automation, with an awesome Prometheus integration
https://home.robusta.dev/
MIT License

Custom resource update triggers #1536

Open Antrakos opened 2 weeks ago

Antrakos commented 2 weeks ago

Is your feature request related to a problem? Hi. We've been using Robusta for quite some time to track updates to Deployments and StatefulSets. With our platform expanding, we need to monitor a few more resources, namely PodTemplate, CronJob, and Flux HelmRelease. We have a simple config that monitors the image field and sends all changes to Kafka for later processing:

customPlaybooks:
  - triggers:
      - on_deployment_update:
          change_filters:
            ignore:
              - status
              - metadata.generation
              - metadata.resourceVersion
              - metadata.managedFields
              - spec.replicas
            include:
              - image
      - on_statefulset_update:
          change_filters:
            ignore:
              - status
              - metadata.generation
              - metadata.resourceVersion
              - metadata.managedFields
              - spec.replicas
            include:
              - image
    actions:
      - resource_babysitter: {}
    sinks:
      - kafka

Recently, kubewatch introduced custom resource monitoring in 2.8.0, and with it I was able to track changes to the necessary resources. However, I can't make Robusta send these events to Kafka. There's no generic trigger where I can specify a kind, so I tried on_kubernetes_any_resource_update (a rough sketch of that attempt is shown after the traceback below). When I eventually customized kubewatch (the forwarder) to send the change events, Robusta started printing error logs like these (I removed most of the resource fields for brevity):

2024-08-29 10:54:43.749 INFO     classes for kind cronjobs cannot be found. skipping. description A `cronjobs` in namespace `common` has been `Updated`: `some-job`
2024-08-29 10:54:43.750 ERROR    Failed to build execution event for update-cronjobs-batch/v1, Event: k8s_payload=IncomingK8sEventPayload(operation='update', kind='cronjobs', apiVersion='batch/v1', clusterUid='TODO', description='A `cronjobs` in namespace `common` has been `Updated`:\n`some-job`', obj={'apiVersion': 'batch/v1', 'kind': 'CronJob', 'metadata': {'name': 'some-job', 'namespace': 'common'}}, oldObj={'apiVersion': 'batch/v1', 'kind': 'CronJob', 'metadata': {'name': 'some-job', 'namespace': 'common'}})
Traceback (most recent call last):
  File "/app/src/robusta/core/playbooks/playbooks_event_handler_impl.py", line 61, in handle_trigger
    execution_event.sink_findings = sink_findings
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'sink_findings'
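
For reference, the wildcard-trigger attempt mentioned above looked roughly like this (a sketch, not the exact config from my setup; the change_filters were the same as in the first playbook):

customPlaybooks:
  - triggers:
      - on_kubernetes_any_resource_update: {}
    actions:
      - resource_babysitter: {}
    sinks:
      - kafka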

Describe the solution you'd like I'd like to have a set of generic triggers where I can specify kind (and apiVersion) to match any CRD, for instance:

customPlaybooks:
  - triggers:
      - on_resource_update:
          match:
            kind:
              - deployments
              - statefulsets
              - cronjobs
              - podtemplates
              - helmreleases
          change_filters:
            ignore:
              - status
              - metadata.generation
              - metadata.resourceVersion
              - metadata.managedFields
              - spec.replicas
            include:
              - image
    actions:
      - resource_babysitter: {}
    sinks:
      - kafka
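
For completeness, the kafka sink referenced in these playbooks is defined separately under sinksConfig. A minimal sketch, assuming the standard kafka_sink options; the broker address and topic below are placeholders:

sinksConfig:
  - kafka_sink:
      name: kafka
      kafka_url: "kafka-broker.default.svc:9092"
      topic: "robusta-events"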

Describe alternatives you've considered I can't find any documentation on how to introduce custom triggers or extend existing events; there is only information about custom actions.

Additional context I am running the latest version of Robusta (0.16.1) with the latest version of kubewatch (2.8.0).

github-actions[bot] commented 2 weeks ago

Hi 👋, thanks for opening an issue! Please note, it may take some time for us to respond, but we'll get back to you as soon as we can!

aantn commented 1 week ago

Hi @Antrakos, thanks for reporting. It isn't supported yet on the Robusta side.

Would you be interested in contributing a feature for this? We have docs here on setting up a dev environment and I'm happy to point you at the relevant code you would need to modify.

Out of curiosity, what are you doing with the events once they reach Kafka?