nodejs / TSC

The Node.js Technical Steering Committee

Let's talk about the CI situation #1614

Open aduh95 opened 2 months ago

aduh95 commented 2 months ago

As of recently, especially with https://github.com/nodejs/build/issues/3887, dealing with the CI when preparing a release or when trying to land a PR has become very frustrating. I tried to list all the pain points, let me know if I forgot something:

I think we need to discuss:

  1. Is there a way to better detect flaky tests before landing them?
  2. How can we handle the immediate situation?

I'm opening this in the TSC repo, because it's kind of a meta discussion that I'm guessing is not going to be of much interest to folks following nodejs/node, but of course anyone is welcome to participate to the discussion.

aduh95 commented 2 months ago
> Is there a way to better detect flaky tests before landing them?

IMO, the correct response to that would be automation. Our previous attempt was https://github.com/nodejs/node/pull/43954, which I don't think has helped much, unfortunately. Maybe we should reconsider https://github.com/nodejs/node/issues/40817: we could have a bot that checks the Jenkins CI result when a resume-ci label is added to a PR:

  1. First it checks whether any of the failing tests is one that's being changed in this PR. If so, it adds a comment on the PR explaining why it won't resume CI.
  2. Then it checks whether all the failing tests have a corresponding issue. If not, it adds a comment on the PR explaining why it won't resume CI.
  3. Otherwise, it resumes the CI.

That should address the "no one bothers to check the failures" and "no one reports the flaky tests" problems – it might not help with "no one is investigating the flakes" though.
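The three checks above could be sketched as a small decision function. A minimal sketch in JavaScript, where `failingTests`, `changedFiles`, and `knownFlakes` are assumed to be gathered from Jenkins and the GitHub API (the function name and shapes are hypothetical, not an existing bot API):

```javascript
// Decide whether the resume-ci bot may restart a Jenkins build.
// failingTests: test files that failed in the last CI run
// changedFiles: files touched by the PR
// knownFlakes:  tests that already have an open flake issue
function shouldResumeCI({ failingTests, changedFiles, knownFlakes }) {
  // 1. Never resume if the PR itself modifies a failing test.
  const touched = failingTests.filter((t) => changedFiles.includes(t));
  if (touched.length > 0) {
    return { resume: false, reason: `PR modifies failing test(s): ${touched.join(', ')}` };
  }
  // 2. Never resume if a failing test has no corresponding open issue.
  const unreported = failingTests.filter((t) => !knownFlakes.has(t));
  if (unreported.length > 0) {
    return { resume: false, reason: `No flake issue filed for: ${unreported.join(', ')}` };
  }
  // 3. Otherwise, resume the build.
  return { resume: true, reason: 'All failures are known flakes' };
}

// Example: one failure, unrelated to the PR, already tracked as a flake.
const decision = shouldResumeCI({
  failingTests: ['test/parallel/test-http-server-close.js'],
  changedFiles: ['lib/fs.js'],
  knownFlakes: new Set(['test/parallel/test-http-server-close.js']),
});
console.log(decision.resume); // true
```

The ordering matters: the "PR touches the test" check runs first, so a genuinely broken test can never be waved through just because an old flake issue exists for it.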

aduh95 commented 2 months ago

Another thing that would help would be to mark as flaky, on the Current release line, all tests which have a corresponding open issue. That would greatly help with getting release proposals ready, and would also help external teams/projects that run our tests. That way, main can keep a high bar for when a test can be marked as flaky, while the release lines (on which we don't do development) can be more forgiving.
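For context, a test is marked flaky by editing the status file of its test directory (e.g. test/parallel/parallel.status); a `PASS,FLAKY` entry makes the CI tolerate failures of that test rather than skip it. A minimal sketch, with a hypothetical test name and platform:

```text
# Hypothetical entry; real entries usually carry a comment linking the tracking issue.
[$system==macos]
test-http-server-close: PASS,FLAKY
```

Landing such entries only on the release staging branches, as suggested, would keep main strict while unblocking release proposals.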

marco-ippolito commented 2 months ago

In the past I tried to automate marking tests as flaky in https://github.com/nodejs/node-core-utils/pull/746, but it was closed for lack of time, if someone wants to give it a shot. The idea is that if a test has failed on x different PRs, a bot creates a PR to mark it as flaky and opens a tracking issue.
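The "failed on x different PRs" heuristic could be sketched like this; `failures` is assumed to be a list of `{ test, prNumber }` records scraped from CI results (names and threshold are hypothetical, not the node-core-utils implementation):

```javascript
// Flag tests that failed across `threshold` distinct PRs. Independent
// changes should not break the same test, so repeated failures across
// unrelated PRs strongly suggest flakiness rather than a regression.
function findFlakyCandidates(failures, threshold = 3) {
  const prsPerTest = new Map();
  for (const { test, prNumber } of failures) {
    if (!prsPerTest.has(test)) prsPerTest.set(test, new Set());
    prsPerTest.get(test).add(prNumber); // Set dedupes repeats on one PR
  }
  return [...prsPerTest]
    .filter(([, prs]) => prs.size >= threshold)
    .map(([test]) => test);
}

const candidates = findFlakyCandidates([
  { test: 'test-a', prNumber: 101 },
  { test: 'test-a', prNumber: 102 },
  { test: 'test-a', prNumber: 103 },
  { test: 'test-b', prNumber: 101 },
  { test: 'test-b', prNumber: 101 }, // same PR twice: counts once
]);
console.log(candidates); // ['test-a']
```

Counting distinct PRs rather than raw failures is the key design choice: a test failing five times on one PR is probably that PR's fault, not a flake.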

joyeecheung commented 2 months ago

I think we should also discuss what happens after a test is marked flaky – is it just going to rot in the status file? I think that has been effectively happening, and it could actually be contributing to the flakes. For example, some of the flakes might be caused by the same V8 task runner bug (https://github.com/nodejs/node/issues/47297); some of those tests have been marked flaky, but if the underlying bug never gets dealt with, it can continue to flake other tests under certain circumstances.

gireeshpunathil commented 2 months ago

Is it reasonable to say all the current problems stem from the fact that there are fewer folks looking at the CI? Or is there a new pattern? (IIRC, our CI was very healthy when there were many folks tending it.)

yharaskrik commented 2 months ago

@juristr this is probably a much larger discussion with the Node team, but it seems like Node could benefit from the flaky target detection that Nx has, no? I'm not too familiar with Node's CI setup right now, and I assume Nx Cloud doesn't have all of the OS targets Node would need, but maybe there's a discussion to be had here?

anonrig commented 2 months ago

If you look into the jenkins-alerts repository, we have a warning/incident almost every day. Most of the time, it is related to host machine issues. https://github.com/nodejs/jenkins-alerts/issues

richardlau commented 2 months ago

> If you look into the jenkins-alerts repository, we have a warning/incident almost every day. Most of the time, it is related to host machine issues. https://github.com/nodejs/jenkins-alerts/issues

Every issue in that repository will be a host machine issue, as the whole point of the repository is to monitor for machine-related issues -- it only checks whether a machine is running out of disk space or is offline in Jenkins.
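The offline check described above amounts to filtering the agent list that Jenkins exposes over its remote API. A rough sketch (not the actual jenkins-alerts code); the payload shape follows the Jenkins `/computer/api/json` endpoint, and the sample agent names below are made up:

```javascript
// Return the display names of Jenkins agents currently marked offline.
function offlineAgents(payload) {
  return payload.computer
    .filter((node) => node.offline)
    .map((node) => node.displayName);
}

// In a real monitor this payload would come from something like
// fetch('https://ci.nodejs.org/computer/api/json'); hard-coded here.
const sample = {
  computer: [
    { displayName: 'test-macos11-x64-1', offline: true },
    { displayName: 'test-ubuntu2204-x64-1', offline: false },
  ],
};
console.log(offlineAgents(sample)); // ['test-macos11-x64-1']
```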

jasnell commented 2 months ago

The CI is so flaky it systematically takes several attempts to get a passing run.

I've had PRs that have required 10-16 CI runs to get even a flaky success. Now the OSX runner has been jammed up for days with no clear indication how to fix it or who can fix it.

mhdawson commented 2 months ago

I think @aduh95's concrete suggestion of the bot which:

  1. First it checks whether any of the failing tests is one that's being changed in this PR. If so, it adds a comment on the PR explaining why it won't resume CI.
  2. Then it checks whether all the failing tests have a corresponding issue. If not, it adds a comment on the PR explaining why it won't resume CI.
  3. Otherwise, it resumes the CI.

Makes a lot of sense to me. I'm +1 on that and I think it would help get tests marked as flaky.

In terms of actually investigating/fixing issues, I really wish we could find some way to get people to volunteer/focus on that, but I don't have any new ideas on that front. If a number of people would commit to spending X amount of time each week/month on it, either separately or together, then I would join that effort; but in the absence of some critical mass of people committing to contribute, I think any one individual sees an insurmountable task.

thomasrockhu-codecov commented 2 months ago

Hey, Tom from @codecov here.

I know this may come off as me being a shill (I am), but as Node already uses Codecov, I thought I'd mention that we are building a flaky test product to help identify and highlight flakes. This is still a pretty new feature, but I'd love to see it actually be useful for a large codebase.

Here is a screenshot from the blog post that shows what a flaky test looks like


Here is a link to the source of the screenshot on GitHub.

richardlau commented 2 months ago

Finding/highlighting flakes is not the problem (which is not to say more could be done). We are already detecting test failures that happen for CIs across multiple PRs in https://github.com/nodejs/reliability. BuildPulse was added as an experiment for Windows builds in https://github.com/nodejs/build/pull/3653 and I have no idea if anyone has been looking at the results.

This is fundamentally a people problem -- we need to somehow motivate people to look at the flakes and decide whether the tests can be fixed or should be removed. Keeping long-term flaky tests in Node.js is just building up problems for later.

And/or be more proactive in detecting when a PR introduces a flake into the system.

As a warning, we also need to not become too reliant on the Resume Build feature of the Multijob plugin in Jenkins, as that plugin is deprecated -- if a future Jenkins update ever breaks it, we'd need to migrate (most likely to Jenkins Pipelines) and we'd lose the Resume Build feature, since it's part of that plugin (I don't know if Pipelines has an equivalent).

thomasrockhu-codecov commented 2 months ago

Got it, thanks for the info @richardlau! I suppose then that our flaky test product is not useful right now for this case.

lpinca commented 2 months ago

There is a common issue (deadlock during process shutdown) behind many (all?) timeout failures in our CI. I've opened a specific issue for this: https://github.com/nodejs/node/issues/54918.

pmarchini commented 2 months ago

Hi everyone, while working on some flaky tests, I implemented this very basic tool (https://github.com/pmarchini/giogo) that leverages cgroups to limit resources (memory, CPU, IO).

Using this approach, I was able to reproduce the flakiness of some of the tests on my local machine. Note: in my case, I was working on test runner tests, and limiting the processor allowed me to reproduce the issue most of the time.

I hope this can be helpful to someone else. If you have any ideas or want to contribute, feel free to open a PR.
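For readers unfamiliar with the technique: the same effect can be achieved by hand with the cgroup v2 filesystem (Linux only, requires root); giogo automates this kind of setup. A rough sketch, with a hypothetical cgroup name and test file:

```sh
# Create a cgroup and cap its CPU at ~5% (5ms of every 100ms period).
mkdir /sys/fs/cgroup/flaky-repro
echo "5000 100000" > /sys/fs/cgroup/flaky-repro/cpu.max
# Move the current shell into the cgroup; children inherit the cap.
echo $$ > /sys/fs/cgroup/flaky-repro/cgroup.procs
# Run the suspect test under CPU starvation.
./node test/parallel/test-some-flaky-case.js
```

Starving the test of CPU widens timing windows, which is often enough to turn a rare race into a failure that reproduces reliably on a local machine.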

bnoordhuis commented 1 month ago

Instead of technical solutions, there's a simple social one: pay someone to investigate and fix flaky tests. It's soul crushing work, no one is going to do that for fun.

The foundation is still swimming in money, right? Might as well put it to better use than marketing and lawyers.

richardlau commented 1 month ago

> Instead of technical solutions, there's a simple social one: pay someone to investigate and fix flaky tests. It's soul crushing work, no one is going to do that for fun.

FYI, that is being planned. The statement of work is being drafted in https://github.com/nodejs/TSC/issues/1629.

bnoordhuis commented 1 month ago

I just saw that, nice! The fact that it's a paid position got kind of buried, but it's there.