TestRunScripts/one_script
```
--- FAIL: TestRunScripts (0.00s)
    --- FAIL: TestRunScripts/one_script (0.00s)
        scripts_test.go:279:
            Error Trace: /home/runner/work/adsys/adsys/internal/testutils/files.go:154
                         /home/runner/work/adsys/adsys/internal/policies/scripts/scripts_test.go:279
            Error:       Not equal:
                         expected: map[string]testutils.treeAttrs{"":testutils.treeAttrs{content:"script3.sh\n", path:"", executable:false}}
                         actual  : map[string]testutils.treeAttrs(nil)

                         Diff:
                         --- Expected
                         +++ Actual
                         @@ -1,8 +1,2 @@
                         -(map[string]testutils.treeAttrs) (len=1) {
                         - (string) "": (testutils.treeAttrs) {
                         -  content: (string) (len=11) "script3.sh\n",
                         -  path: (string) "",
                         -  executable: (bool) false
                         - }
                         -}
                         +(map[string]testutils.treeAttrs) <nil>
            Test:        TestRunScripts/one_script
            Messages:    got and expected content differs
```
TestRunScripts/keeps_running_flag_after_non_machine_shutdown
```
--- FAIL: TestRunScripts (0.00s)
    --- FAIL: TestRunScripts/keeps_running_flag_after_non_machine_shutdown (0.04s)
        scripts_test.go:279:
            Error Trace: /home/runner/work/adsys/adsys/internal/testutils/files.go:156
                         /home/runner/work/adsys/adsys/internal/policies/scripts/scripts_test.go:279
            Error:       Not equal:
                         expected: map[string]testutils.treeAttrs{"":testutils.treeAttrs{content:"script3.sh\nscript1.sh\nscript2.sh\n", path:"", executable:false}}
                         actual  : map[string]testutils.treeAttrs{"":testutils.treeAttrs{content:"script1.sh\nscript2.sh\n", path:"", executable:false}}

                         Diff:
                         --- Expected
                         +++ Actual
                         @@ -2,3 +2,3 @@
                          (string) "": (testutils.treeAttrs) {
                         -  content: (string) (len=33) "script3.sh\nscript1.sh\nscript2.sh\n",
                         +  content: (string) (len=22) "script1.sh\nscript2.sh\n",
                           path: (string) "",
            Test:        TestRunScripts/keeps_running_flag_after_non_machine_shutdown
            Messages:    got and expected content differs
```
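Both failures above have the shape of a timing race: the golden tree seems to be compared before the scripts have finished writing it, so an entry (or the whole tree) is missing. That is a guess from the logs, not a confirmed root cause. A first step is usually to make the flake reproducible locally; below is a minimal, hypothetical stress wrapper (the function name and the call into the real test body are placeholders, not existing adsys code).

```go
package scripts_test

import (
	"fmt"
	"testing"
)

// TestRunScriptsStress is a hypothetical wrapper (not part of the adsys test
// suite) that repeats a suspected flaky scenario many times in a single
// `go test` invocation. Running the iterations as parallel subtests widens
// any race window, so a failure that shows up once in hundreds of CI runs
// tends to reproduce within a few local runs.
func TestRunScriptsStress(t *testing.T) {
	for i := 0; i < 100; i++ {
		t.Run(fmt.Sprintf("iteration_%d", i), func(t *testing.T) {
			t.Parallel()
			// Invoke the same setup and assertions as the flaky subtest
			// (e.g. the body behind TestRunScripts/one_script) here.
		})
	}
}
```

Running the unchanged test repeatedly also works: `go test -run TestRunScripts -count=100 -race ./internal/policies/scripts/` repeats it as-is and lets the race detector flag any unsynchronized accesses.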
This is an umbrella issue that will be updated with flaky / transient test failures as we notice them. A test is considered flaky if it passes most of the time but sometimes fails due to race conditions, runner slowness, solar eclipses, etc. The tests pass on re-runs and we then forget about them until they surface again, so let's keep a list of them to check against when we see one. Ideally, if specific failures occur too often, we should make an effort to see what's happening and provide a fix.
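When a failure like the ones above really is a race between an asynchronous operation (scripts still running) and the assertion on its output, one common fix on the test side is to poll for the expected state instead of reading it once. The sketch below uses testify's `require.Eventually`; the `waitForScriptContent` helper, the directory layout, and the file name are invented for illustration and are not the actual `testutils` API.

```go
package scripts_test

import (
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// waitForScriptContent polls until the file produced by the scripts contains
// the expected lines, instead of reading it exactly once right after the run
// is triggered. The file name "golden.txt" is a stand-in for whatever the
// real test compares.
func waitForScriptContent(t *testing.T, dir, want string) {
	t.Helper()

	require.Eventually(t, func() bool {
		got, err := os.ReadFile(filepath.Join(dir, "golden.txt"))
		if err != nil {
			return false // not written yet, keep polling
		}
		return string(got) == want
	}, 5*time.Second, 10*time.Millisecond, "scripts did not produce %q in time", want)
}
```

Whether polling is the right call depends on the test's contract: if the script run is supposed to be complete by the time the assertion fires, the fix belongs in the code under test (proper synchronization) rather than in the test itself.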
Let's limit ourselves to one test per comment, with the following template:
For better tracking, we should aim to update existing comments rather than posting new ones when the same tests are failing.