Closed mfbalin closed 2 months ago
To trigger regression tests:
@dgl-bot run [instance-type] [which tests] [compare-with-branch]
For example: @dgl-bot run g4dn.4xlarge all dmlc/master
or @dgl-bot run c5.9xlarge kernel,api dmlc/master
@frozenbugs @Rhett-Ying Is there a way to turn this probabilistic logic into a test decorator for the flaky tests? I am not a Python expert.
I'd rather lower it to 0.6. We will move this to the regression framework and remove this.
@frozenbugs I refactored the logic into a function. If more tests fail in the future before these tests are removed, we can simply reuse the wrapper to make them more robust.
Description
We run the test up to 10 times to see whether it can reach an 80% success rate. If it succeeds on the 1st try, no extra iterations are performed. If it fails on the 1st try, it needs to succeed on the following 4 tries to count as a pass (4/5 = 80%). If it fails 3 times, it counts as a failure, since an 80% success rate is no longer achievable.
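The retry rule above can be sketched as a test decorator. This is a minimal illustration, not the code in this PR; the names `retry_flaky`, `max_tries`, and `success_rate` are hypothetical:

```python
import functools


def retry_flaky(max_tries=10, success_rate=0.8):
    """Hypothetical decorator sketching the retry logic described above.

    Re-runs a flaky test until the target success rate is reached, or
    until so many failures have accumulated that the rate is no longer
    achievable within max_tries runs.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # With max_tries=10 and success_rate=0.8, at most 2 failures
            # are tolerated; a 3rd failure makes 8/10 impossible.
            max_failures = round(max_tries * (1 - success_rate))
            successes = failures = 0
            last_exc = None
            while True:
                try:
                    fn(*args, **kwargs)
                    successes += 1
                except Exception as exc:
                    failures += 1
                    last_exc = exc
                if failures > max_failures:
                    raise last_exc  # target rate no longer achievable
                if successes >= success_rate * (successes + failures):
                    return  # current success ratio already meets the target
        return wrapper
    return decorator
```

For example, a test that fails once and then succeeds passes after 5 runs (4/5 = 80%), while a first-try success returns immediately (1/1 = 100%).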
Fixes https://github.com/dmlc/dgl/pull/7506#issuecomment-2216813220
Checklist
Please feel free to remove inapplicable items for your PR.
Changes