hanshuebner closed this PR 8 months ago
This needs a test and the `getFullName` function needs to be factored out, but what do you think about the approach, @Tieske @hishamhm?
Wouldn't the more typical pattern be to log failures and specifically rerun only previously failed tests? Kinda brainstorming here but it seems like that's a pattern I've used before in other test frameworks.
@alerque The thing with a "failed" list is that it is harder to maintain, because entries need to be removed from it once they pass, while the "succeeded" list can simply be appended to. In fact, the `--filter-out-file` and the `--log-success` file can be the same for every (re-)try of the test suite. I am aware that ideally you'd want to fix your tests so that they are not flaky, but we are in a situation where we cannot quickly do that. Thus, being able to run a large test suite repeatedly until all flaky tests have passed is what we need.
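The mechanics of that single shared file can be sketched in plain shell. Everything below is a stand-in for illustration (the loop plays the role of the test runner, and the test names, the `b_ran` marker file, and `passed.txt` are made up), not busted itself:

```shell
# Simulated runner: skip tests already in the success log (the
# --filter-out-file role), append newly passing tests to the same
# log (the --log-success role).
SUCCESS_LOG=passed.txt
: > "$SUCCESS_LOG"                        # start with an empty success log

run_suite() {
  for t in test_a test_b test_c; do
    grep -qx "$t" "$SUCCESS_LOG" && continue     # already passed: skip
    echo "running $t"
    # pretend test_b is flaky: it fails on the first try only
    if [ "$t" = test_b ] && [ ! -e b_ran ]; then
      touch b_ran
      continue                                   # "failed": not logged
    fi
    echo "$t" >> "$SUCCESS_LOG"                  # passed: append to log
  done
}

run_suite   # first try: test_a and test_c pass, test_b fails
run_suite   # second try: only test_b is run, and it now passes
```

Because the success log only ever grows, the same file can be handed to both options on every retry; nothing ever has to be pruned from it.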
I can add a `--filter-file` and a `--log-failure` option if you think that someone would find that more useful. The complication is that one would have to specify `--filter-file` only in the second and subsequent runs, using the `--log-failure` file of the previous run (if any) as input to `--filter-file`. I'd prefer the simple solution, even if it might be unconventional. Let me know what you think.
I assumed the use case for this was hacking on the code until the tests pass. If dealing with flaky tests is the use case, would a `--retry-failed` or similar way to give failed tests a second (or more) go in a single test-runner invocation help?
A `--retry-failed` option would work in principle, but it would make it less apparent that there are flaky tests. We want to be aware of the flakiness, so a failing test suite run is good and a manual intervention is acceptable. What we want to avoid is having to re-run all the tests that passed, as that greatly reduces our productivity. With my proposed approach, we can reduce the time it takes to re-run a single failed test from 20-30 minutes to under one minute, which is a substantial win.
@alerque Is this PR now mergeable? We would like to start using the added functionality more and it would be good to have it be in the upstream repo for that reason.
I actually just fixed all the nitpicks from my code review, but I can't push the changes because apparently you don't have the "allow maintainers" option checked on this PR.
Can you either allow me access or pull the commits from this branch or split up the commits yourself as in that branch?
Thank you for your review - I've addressed your comments. Where would I enable the "allow maintainers" option for next time?
There should be a checkbox in the right sidebar of this PR, I think: "Allow edits and access to secrets by maintainers".
I don't see that option, maybe because I'm not an org administrator of Kong? Anyway, is this OK now, or would you rather I use your branch instead?
Unfortunately you can't address a comment about not touching irrelevant code by re-touching it again in another commit; that just leaves two irrelevant commits in the history for blame/bisect etc. I edited the history in a way that completely removes the style change and moves the unrelated bit to a different commit.
Yes you might be right about not being able to allow me access to an org you don't administer.
I see three options to fix this now:
1. You can pull in my branch linked above by adding a remote, then `git reset --hard <branch_name_or_sha>`, then force-push so that my commits end up in this PR.
2. You do the same yourself with `git revise` and/or `git rebase foo` to split and edit the existing commits.
3. I can merge the commits manually outside of GitHub. You'll still be credited with the commit authorship, but the PR will be marked as closed instead of as merged. Up to you if you care.
No sweat off my back for any of the above; they just have different tradeoffs for you ;-)
I've reset the PR to your branch and force-pushed. Thanks!
Thank you for merging! Can you prepare a release as well, or should I ping someone else to do it?
These two options are meant to be used together to allow re-running only the failing tests of a test suite: the `--log-success` option appends the name of each successful test to the given file, and the `--filter-out-file` option specifies a file listing tests that should not be run.
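As a sketch of the intended usage, the same file can be passed to both options and the suite looped until it is green. To keep the example self-contained, `busted` below is a stand-in shell function that fails once and then succeeds (mimicking a suite with one flaky test); `passed.txt`, `spec/`, and the `.suite_ok` marker file are illustrative names:

```shell
# Stand-in for the real test runner: fails on the first invocation,
# succeeds afterwards (a one-shot "flaky suite"). Illustration only.
busted() {
  [ -e .suite_ok ] || { touch .suite_ok; return 1; }  # first run "fails"
  return 0                                            # re-runs "pass"
}

# One file serves as both the success log and the skip list:
until busted --log-success=passed.txt --filter-out-file=passed.txt spec/; do
  echo "re-running only the tests not yet logged as passed"
done
echo "suite green"
```

Each retry then executes only the tests that have not yet been appended to the success log, which is exactly the "re-run until all flaky tests have passed" workflow described above.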