I found evidence that the CI is non-reproducible: exactly one of the reference tests did not run here: https://github.com/cadet/CADET-Core/actions/runs/10421401942/job/28863536531. After disabling that test, another test suddenly stopped working, even though it had worked in the previous run above: https://github.com/cadet/CADET-Core/actions/runs/10407694093/job/28847668469

On top of that, the test that failed in https://github.com/cadet/CADET-Core/actions/runs/10421401942/job/28863536531 works just fine when run as the only CI test: https://github.com/cadet/CADET-Core/actions/runs/10422346048
Here are my current results (see the reproduction sketch after the listing):

I don't remember which branch these are from; I think fix/tests:
- LRM_DG numerical Benchmark with parameter sensitivities for linear case fails in:
  - testRunner [CI]
  - testRunner [LRM] [DG]
- LRM_DG numerical Benchmark with parameter sensitivities for SMA LWE case fails in:
  - testRunner [LRM] [DG]
  - testRunner [LRM]
  - testRunner [CI]
- LRM numerical Benchmark with parameter sensitivities for SMA LWE case fails in:
  - testRunner [LRM]
fit/tests:
- All tests pass.
master:
- Running testRunner [CI] completes without errors.
- LRM_DG numerical Benchmark with parameter sensitivities for SMA LWE case fails in:
  - testRunner [LRM] [DG]
  - testRunner [LRM]
  - testRunner [CI]
- LRM numerical Benchmark with parameter sensitivities for SMA LWE case fails numerically in:
  - testRunner [LRM]
  - testRunner [CI]
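The bracket tags in these invocations look like Catch2 tag filters, which suggests the failures can be reproduced locally by calling the test binary directly. A minimal sketch, assuming testRunner is the Catch2-based test executable; the build target and paths are assumptions and may differ from the actual project layout:

```sh
# Build the test runner (target name and paths are assumptions; adjust to your setup).
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --target testRunner

# Run only the tests tagged [CI]; quote the tags so the shell does not glob them.
./build/test/testRunner "[CI]"

# In Catch2, juxtaposed tags in a single argument are AND-ed,
# so this selects tests carrying both [LRM] and [DG].
./build/test/testRunner "[LRM][DG]"

# Isolate a single test case by its full name to rule out interactions
# with other tests running in the same process.
./build/test/testRunner "LRM_DG numerical Benchmark with parameter sensitivities for SMA LWE case"
```

If a test passes in isolation but fails inside the full [CI] run, that points toward shared state between test cases (static variables, files written by earlier tests) rather than a problem in the test itself.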
We suspect that our CI sometimes produces results that are not reproducible. We should keep track of this and can link CI runs in this issue to discuss potential causes.
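One way to gather evidence is to run a suspect test repeatedly on identical code and tally the outcomes. A rough sketch, again assuming the testRunner binary above and that a failing test case yields a nonzero exit code:

```sh
#!/usr/bin/env bash
# Hypothetical helper: repeat one test 20 times and count pass/fail.
RUNNER=./build/test/testRunner   # path is an assumption; adjust to your build
TEST="LRM_DG numerical Benchmark with parameter sensitivities for SMA LWE case"

pass=0; fail=0
for i in $(seq 1 20); do
  if "$RUNNER" "$TEST" > "run_$i.log" 2>&1; then
    pass=$((pass + 1))
  else
    fail=$((fail + 1))
  fi
done
echo "passed: $pass, failed: $fail"
```

A mix of passes and failures on the same commit would confirm the non-reproducibility, and diffing the log of a passing run against a failing one should show where the numbers start to diverge.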
Update: This is also an issue on Windows; see the full CI run compared to running only the specific test that failed. Locally running the CI tests on Windows now also fails for one test.