neuroinformatics-unit / movement

Python tools for analysing body movements across space and time
http://movement.neuroinformatics.dev
BSD 3-Clause "New" or "Revised" License

add merge_group as workflow trigger #154

Closed niksirbi closed 2 months ago

niksirbi commented 3 months ago

I've enabled Merge Queues for the main branch. For this to work in conjunction with our CI, merge_group has to be added as a workflow trigger. This trigger will be emitted when a PR is marked as "Merge when Ready".

In our day-to-day work this simply means that PRs are accepted by clicking "Merge when Ready", and GitHub will automatically perform the necessary CI checks and merge PRs in first-in-first-out order.
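
For context, this is roughly what the change looks like in a workflow file: the new event is simply added to the `on:` block alongside the existing triggers. (A minimal sketch only; the actual workflow names and trigger list in this repo may differ.)

```yaml
# Sketch of a CI workflow that also runs for merge queue entries.
# The push/pull_request triggers shown here are assumed, not exact.
name: tests

on:
  push:
    branches:
      - main
  pull_request:
  merge_group:   # fires when a PR enters the merge queue ("Merge when Ready")

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "run linting and tests here"
```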

It was not immediately apparent to me why the new "merge_group" trigger is needed. This post explains it:

> The merge queues are based on two other features of GitHub you’ll need to understand first: branch protection rules and automerging of pull requests.
>
> Branch protection rules allow you to force a certain set of ceremonies around merging to your branch. For example, you can forbid directly pushing to main or force a CI run before merging. If you are looking into merge queues, you probably have both of those already enabled.
>
> The merge queues themselves are also a form of branch protection, so this is a new checkbox you’ll need to enable in that interface.
>
> Automerging is a feature on top of the protection rules. It allows you to press the “Merge” button before the CI is finished, and the merge will be performed when/if it’s successful.
>
> Merge queues surprisingly do not replace the CI step before the automerge, but instead add a separate, later step to it. So, the end flow looks like this:
>
>   1. A pull request is created.
>   2. CI runs on the pull request’s unmerged code.
>   3. At any point before, during or after the CI run, the PR is marked for merge.
>   4. If the CI succeeded, the pull request is put in the merge queue.
>   5. In the merge queue, a separate CI is run on the merged code.
>   6. If that succeeds, the pull request is merged (provided the rest of the queue also got merged before that).
The thing that worries me about this is that we would be running our checks multiple times, i.e. during a push on the feature branch, when a PR is opened, then again when it's put in the merge queue, and again after being merged to main. That amount of checking seems excessive to me, and perhaps we need to rethink our CI workflow? Or perhaps we are not at the scale of needing merge queues yet. They seem to be most useful for repos with multiple merges per day.
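
If the duplication does become a problem, one possible mitigation (a sketch of an idea, not something configured in this PR) would be to restrict the `push` trigger to `main` and cancel superseded runs with a `concurrency` group, so pushes to a feature branch don't run on top of the `pull_request` and `merge_group` runs:

```yaml
# Hypothetical mitigation: only run on pushes to main, and cancel
# in-progress runs that have been superseded on the same ref.
on:
  push:
    branches:
      - main
  pull_request:
  merge_group:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```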

codecov[bot] commented 3 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 99.28%. Comparing base (b6a7847) to head (15209c4).

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main     #154   +/-   ##
=======================================
  Coverage   99.28%   99.28%
=======================================
  Files           9        9
  Lines         557      557
=======================================
  Hits          553      553
  Misses          4        4
```

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

sonarcloud[bot] commented 3 months ago

Quality Gate passed

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarCloud

niksirbi commented 3 months ago

I'd like a second opinion on this @lochhh! Not sure whether it's worth it for our use case or not. My thinking is that we could try it for a while and re-evaluate after 1-2 months whether it's more on the helpful or annoying side.

lochhh commented 2 months ago

> I'd like a second opinion on this @lochhh! Not sure whether it's worth it for our use case or not. My thinking is that we could try it for a while and re-evaluate after 1-2 months whether it's more on the helpful or annoying side.

I agree. If this is too much, we can always revert. I think this can be useful when, say, dependabot creates a bunch of PRs that can be merged in bulk. We can also play with the timeout; at the moment I see it is set to 5 minutes, which I think is rather short, since we should allow time for builds and tests to complete.