Closed: diederikdehaas closed this issue 1 year ago
Works for me :tm: https://gitlab.com/ynezz/openwrt-ucode/-/jobs/4321288368, so can you provide more details about your environment?
This is the pipeline that failed for me: https://salsa.debian.org/diederik/ucode/-/pipelines/529804
So perhaps somehow related to your runner/kernel? I would try the following:
diff --git a/tests/custom/03_stdlib/55_clock b/tests/custom/03_stdlib/55_clock
index 109ce218e22d..370e3acaa41a 100644
--- a/tests/custom/03_stdlib/55_clock
+++ b/tests/custom/03_stdlib/55_clock
@@ -19,8 +19,8 @@ default realtime one.
assert(delta2 >= 0, "Monotonic clock went backwards!");
printf("%.J\n", [
- (delta1 >= 240000000 && delta1 <= 260000000) ? true : "unexpected delta: " + delta1,
- (delta2 >= 240000000 && delta2 <= 260000000) ? true : "unexpected delta: " + delta2
+ (delta1 >= 240000000 && delta1 <= 270000000) ? true : "unexpected delta1: " + delta1,
+ (delta2 >= 240000000 && delta2 <= 270000000) ? true : "unexpected delta2: " + delta2
]);
%}
-- End --
- "unexpected delta: 265114782",
It would be nice to know which delta failed, so please re-run with the following:
diff --git a/tests/custom/03_stdlib/55_clock b/tests/custom/03_stdlib/55_clock
index 109ce218e22d..b63919b91ddf 100644
--- a/tests/custom/03_stdlib/55_clock
+++ b/tests/custom/03_stdlib/55_clock
@@ -19,8 +19,8 @@ default realtime one.
assert(delta2 >= 0, "Monotonic clock went backwards!");
printf("%.J\n", [
- (delta1 >= 240000000 && delta1 <= 260000000) ? true : "unexpected delta: " + delta1,
- (delta2 >= 240000000 && delta2 <= 260000000) ? true : "unexpected delta: " + delta2
+ (delta1 >= 240000000 && delta1 <= 260000000) ? true : "unexpected delta1: " + delta1,
+ (delta2 >= 240000000 && delta2 <= 260000000) ? true : "unexpected delta2: " + delta2
]);
%}
-- End --
I started with this one ... and that succeeded?!? I did cancel the running builds with ... as I assumed they were irrelevant. I will re-run the pipeline, but this time I'll let them all run.
> So perhaps somehow related to your runner/kernel?

That could be. I did ask but haven't received an answer yet; my guess would be 6.1 (as Bookworm will have) and otherwise 5.10 (as Bullseye has).
While scanning through the build output, I did notice the following (starting from line 816):
Checking build/CMakeFiles/3.18.4/CompilerIdC/CMakeCCompilerId.c: __sun;sun...
1/27 files checked 2% done
Checking chunk.c ...
2/27 files checked 3% done
Checking compiler.c ...
compiler.c:2339:2: error: Address of local auto-variable assigned to a function parameter. [autoVariables]
compiler->patchlist = &p;
^
compiler.c:2387:2: error: Address of local auto-variable assigned to a function parameter. [autoVariables]
compiler->patchlist = &p;
^
compiler.c:2497:2: error: Address of local auto-variable assigned to a function parameter. [autoVariables]
compiler->patchlist = &p;
^
compiler.c:2664:2: error: Address of local auto-variable assigned to a function parameter. [autoVariables]
compiler->patchlist = &p;
^
compiler.c:3271:70: error: Signed integer overflow for expression '0xffff<<16'. [integerOverflow]
uc_compiler_emit_u32(compiler, 0, source->exports.count | (0xffff << 16));
^
Checking compiler.c: NO_COMPILE...
3/27 files checked 16% done
Checking examples/exception-handler.c ...
4/27 files checked 16% done
Checking examples/execute-file.c ...
5/27 files checked 17% done
Checking examples/execute-string.c ...
6/27 files checked 17% done
Checking examples/native-function.c ...
7/27 files checked 17% done
Checking examples/state-reset.c ...
8/27 files checked 18% done
Checking examples/state-reuse.c ...
9/27 files checked 18% done
Checking lexer.c ...
10/27 files checked 21% done
Checking lib.c ...
Checking lib.c: __APPLE__...
11/27 files checked 31% done
Checking lib/fs.c ...
Checking lib/fs.c: O_LARGEFILE...
12/27 files checked 35% done
Checking lib/math.c ...
13/27 files checked 35% done
Checking lib/nl80211.c ...
14/27 files checked 45% done
Checking lib/resolv.c ...
Checking lib/resolv.c: __APPLE__...
15/27 files checked 48% done
Checking lib/rtnl.c ...
16/27 files checked 63% done
Checking lib/struct.c ...
Checking lib/struct.c: __powerc...
17/27 files checked 68% done
Checking lib/ubus.c ...
Checking lib/ubus.c: HAVE_NEW_UBUS_STATUS_CODES...
18/27 files checked 75% done
Checking lib/uci.c ...
19/27 files checked 77% done
Checking lib/uloop.c ...
Checking lib/uloop.c: HAVE_ULOOP_TIMEOUT_REMAINING64...
Checking lib/uloop.c: __APPLE__...
20/27 files checked 80% done
Checking main.c ...
21/27 files checked 82% done
Checking program.c ...
22/27 files checked 84% done
Checking source.c ...
23/27 files checked 85% done
Checking tests/fuzz/test-fuzz.c ...
24/27 files checked 85% done
Checking types.c ...
types.c:400:5: warning: The address of local variable 'pv' might be accessed at non-zero index. [objectIndex]
s[i] = str[i];
^
types.c:394:15: note: Address of variable taken here.
s = (char *)&pv + 1;
^
types.c:400:5: note: The address of local variable 'pv' might be accessed at non-zero index.
s[i] = str[i];
^
Checking types.c: __clang_analyzer__...
25/27 files checked 90% done
Checking vallist.c ...
26/27 files checked 92% done
Checking vm.c ...
Checking vm.c: __clang_analyzer__...
27/27 files checked 100% done
This was from your output, btw. It (obviously) didn't cause a failure, but I figured I'd mention it anyway.
EDIT: FTR: I have no knowledge of the code, so if this is expected/intended, just ignore my remark.
Or maybe the load/speed on the runner? I noticed 'your' runner finished the tests quicker than 'mine':
Dealing with the failure may account for some of the time difference, but not ~88 secs.
> I started with this one ... and that succeeded ?!? I did cancel the running of the build with ... as I assumed they were irrelevant. I will re-run the pipeline but this time I'll let them all run.
And it succeeded again. Attribute the initial failure to the phase of the moon?
EDIT: answering myself: Yes
This happens occasionally when the CI runner is very slow / loaded: too much time elapses between the consecutive calls to clock(). I guess the check could be relaxed to simply verify that the second timestamp is larger than the first one.
When I ran the .gitlab-ci.yml workflow, I encountered the following error:

On a subsequent run, I got the same error, which makes me think there's an error in the code?