kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Report failures of periodic jobs to the cluster-api Slack channel #10520

Open sbueringer opened 2 months ago

sbueringer commented 2 months ago

I noticed that CAPO is reporting periodic test failures to Slack, e.g.: https://kubernetes.slack.com/archives/CFKJB65G9/p1713540048571589

I think this is a great way to surface issues with CI (and folks can also directly start a thread based on a Slack comment like this).

This could be configured ~ like this: https://github.com/kubernetes/test-infra/blob/5d7e1db75dce28537ba5f17476882869d1b94b0a/config/jobs/kubernetes-sigs/cluster-api-provider-openstack/cluster-api-provider-openstack-periodics.yaml#L48-L55
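
For context, the linked CAPO periodics use Prow's Slack reporter. Roughly, a similar periodic job for us could look like the sketch below; the job name, channel, image and report template are illustrative placeholders, not copied from the CAPO file:

```yaml
# Sketch of a Prow periodic job that reports failures to Slack.
# Job name, channel, image and template are placeholders.
periodics:
  - name: periodic-cluster-api-e2e-main   # hypothetical job name
    interval: 2h
    decorate: true
    reporter_config:
      slack:
        channel: 'cluster-api'            # Slack channel to post failures to
        job_states_to_report:
          - failure
          - error
        report_template: 'Job {{.Spec.Job}} ended with state {{.Status.State}}: {{.Status.URL}}'
    spec:
      containers:
        - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master   # placeholder image
          command: ["runner.sh"]
          args: ["./scripts/ci-e2e.sh"]   # placeholder entrypoint
```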

What do you think?

sbueringer commented 2 months ago

cc @chrischdi @fabriziopandini

k8s-ci-robot commented 2 months ago

This issue is currently awaiting triage.

CAPI contributors will take a look as soon as possible, apply one of the triage/* labels, and provide further guidance.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

chrischdi commented 2 months ago

Oh wow, yeah, that would be a great thing. I just fear that it may pollute the channel too much. But we could try it and fail fast: ask later in the community meeting or via a Slack thread/poll whether it is too much.

killianmuldoon commented 2 months ago

Do we know if this respects testgrid-num-failures-to-alert? If so, it could be great.

sbueringer commented 2 months ago

I'm not sure if it respects that. We could try it and roll back if it doesn't?

sbueringer commented 2 months ago

If it still pollutes the channel too much after considering testgrid-num-failures-to-alert, we have to focus more on CI :D

(I'm currently guessing that we would get one Slack message for every mail that we get today, but I don't know.)

killianmuldoon commented 2 months ago

One Slack message per mail would be perfect - more would disrupt the channel.

WDYT about enabling it for CAPV first?

killianmuldoon commented 2 months ago

Also fine with making the change and rolling back if it doesn't work

sbueringer commented 2 months ago

> One Slack message per mail would be perfect - more would disrupt the channel.
>
> WDYT about enabling it for CAPV first?

Fine for me. We can also ask the OpenStack folks how spammy it is for them today (cc @mdbooth @lentzi90)

lentzi90 commented 2 months ago

For CAPO we get a Slack message for every failure and an email only after 2 failures in a row. I think it has been tolerable for us, but it indicates that the Slack reporting does not check testgrid-num-failures-to-alert (at least with the way we have it configured).
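
For reference, my understanding is that the two alerts are wired separately: the Slack messages come from the Prow reporter_config on the job, while testgrid-num-failures-to-alert is a TestGrid annotation that (as far as I can tell) only gates the email alerts. A rough sketch of that split, with illustrative values rather than the real ones:

```yaml
# Sketch only: Slack reporting and TestGrid email alerts are separate
# mechanisms, so the annotation threshold does not throttle Slack messages.
periodics:
  - name: periodic-capo-e2e                       # hypothetical job name
    interval: 4h
    reporter_config:
      slack:
        channel: 'cluster-api-openstack'          # posts on every reported failure
        job_states_to_report: [failure, error]
    annotations:
      testgrid-alert-email: dev-list@example.com  # placeholder address
      testgrid-num-failures-to-alert: "2"         # email only after 2 consecutive failures
    # spec: ... (omitted for brevity)
```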

sbueringer commented 2 months ago

Hm okay, every failure is just too much, so we should probably take a closer look at the configuration / implementation. One message for every failure just doesn't make sense for the number of tests/failures we have (the signal/noise ratio is just wrong).

fabriziopandini commented 2 months ago

+1 to test this if we find a config that is reasonably noisy (but not too noisy). cc @kubernetes-sigs/cluster-api-release-team

/priority backlog
/kind feature

adilGhaffarDev commented 2 months ago

+1 from my side too. Tagging CI lead @Sunnatillo. I will add this to the improvement tasks for the v1.8 cycle; the CI team can look into this one.

Sunnatillo commented 2 months ago

Sounds great. I will take a look

Sunnatillo commented 1 month ago

I guess testgrid-num-failures-to-alert should help with the amount of noise. If we set it to, for example, 5, we can be sure that we receive messages about constantly failing tests. That setting makes the config send the alert only after 5 consecutive failures.

Sunnatillo commented 1 month ago

/assign @Sunnatillo

lentzi90 commented 1 month ago

@Sunnatillo testgrid-num-failures-to-alert does not affect the Slack messages for CAPO at least. Only emails are affected by that in my experience.

Sunnatillo commented 1 month ago

> @Sunnatillo testgrid-num-failures-to-alert does not affect the Slack messages for CAPO at least. Only emails are affected by that in my experience.

Thank you for the update. I will open an issue in test-infra and try to find a way to do it.

Sunnatillo commented 1 month ago

I opened an issue regarding this in test-infra: https://github.com/kubernetes/test-infra/issues/32687