Closed h-vetinari closed 2 years ago
I was sufficiently weirded out by this that I raised: https://github.com/pytest-dev/pytest/issues/8986
Duh, it's the `scipy.test` wrapper. 🤦
Still, it's surprising IMO that the exit code is 0. Will probably patch later.
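A minimal sketch of what such a patch could look like, assuming `scipy.test()` returns a boolean pass/fail (as the NumPy-style pytest wrapper does); the helper name is illustrative, not the actual conda-forge test script:

```python
import sys


def exit_code_for(passed: bool) -> int:
    # scipy.test() reports success as a boolean; map it to a conventional
    # process exit code so CI actually fails when the suite fails.
    return 0 if passed else 1


# In the recipe's test script one would then call (sketch):
#   import scipy
#   sys.exit(exit_code_for(scipy.test()))
```

Without the `sys.exit` mapping, the script finishes normally and the process exits 0 regardless of the test result, which is the silent-pass behaviour described above.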
This is a QEMU bug and doesn't occur when testing on a native machine.
That's good to hear! Though we'd still effectively be flying blind in CI, i.e. we wouldn't catch actual regressions that are masked by this bug, no?
Same is true for `osx-arm64`.
That's a known limitation, and - correct me if I'm wrong - isn't there a daily job somewhere that tests newly built osx-arm packages?
I mean, don't get me wrong, if people are happy with untested PPC packages, I'm not going to stand in the way. It just makes me queasy to have no testing of these artefacts at all.
isn't there a daily job somewhere that tests newly built osx-arm packages?
Not anymore
Re testing - there are no longer CI jobs for PPC on the main SciPy repo nor on https://github.com/MacPython/scipy-wheels/. I don't think it's necessarily conda-forge's job to run the full test suite because of that. Running tests under QEMU is super slow, so running a subset (e.g., just import tests of all submodules plus a couple of the fast submodule test suites like `interpolate.test()`) seems fine to me.
It's easy to reduce to just import tests, with an obvious drop in coverage.
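A reduced smoke test along these lines could be sketched as follows (hypothetical helper; the submodule list is only an example from this thread, and the `scipy` calls are shown as comments so the sketch stays self-contained):

```python
import importlib


def import_smoke_test(package, submodules):
    """Try importing each submodule of `package`; return the names that failed."""
    failed = []
    for name in submodules:
        try:
            importlib.import_module(f"{package}.{name}")
        except Exception:
            failed.append(name)
    return failed


# Intended usage on the emulated PPC builds (sketch):
#   failed = import_smoke_test("scipy", ["odr", "misc", "cluster", "fft"])
#   assert not failed, f"import failures: {failed}"
#   import scipy.interpolate
#   assert scipy.interpolate.test()  # plus one fast submodule suite on top
```

This keeps the emulated run fast while still catching packaging breakage such as missing extension modules, at the obvious cost of coverage.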
Could you make a suggestion of modules? In addition, their respective parts of the test suite must not use sparse matrices anywhere, lest they run into this QEMU bug (@isuruf, is there an issue tracking this somewhere?)
Could you make a suggestion of modules
`odr`, `misc`, `cluster`, `fft`
This should now be fixed with the `sys.exit` stuff.
I had been quite happy that the PPC builds stopped timing out all the time with the switch from Travis to Azure, and didn't inspect further since the CI passed.
However, I now saw that the test suite on PPC already fails during test collection, and then passes silently. There's at least a scipy issue involved (probably https://github.com/scipy/scipy/issues/14560), but then also a pytest (or conda-build) issue: why does this exit 0?!?