For roughly the past month, I have been collecting data on flaky CI tests. The initial idea was to mark them as flaky, but per pytest's docs on flaky tests, that should never be a long-term solution. Instead, tests should be (randomly) reordered, rewritten with more atomic assertions, or split into different groups to find and eliminate the root cause of the flaky behavior. That will have to wait until time permits. For now, marking them as flaky would save us from re-running them manually.
Here are the flaky tests of this repository I gathered so far:
## Flaky tests

### Auxiliary

#### Runtime error DB connection {#runtime-error-db-connection}
Names are shortened to `message_ix_models/tests` as the starting directory.