Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-webservice. I tried to rerender for you, but it looks like there was nothing to do.
@twiecki please feel free to commit directly yourself. I enabled edits by maintainers.
Some broken pip dependencies:
And under Windows, it's showing a stack overflow. :persevere:
Still waiting for more stuff to complete.
The test suite is too large to run in one test job on Windows. It's also reproducible locally: none of these tests fail individually, but they give the stack overflow when all of them run in the same pytest call.
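For future reference, here is a rough sketch (in Python, not the actual recipe test script) of running each test module in its own pytest process to sidestep this; the layout it assumes (pymc3/tests/test_*.py) matches the upstream package.

```python
# Hypothetical workaround sketch, not part of the feedstock: run each test module
# in its own pytest process so that no single process accumulates enough stack
# usage to overflow on Windows.
import subprocess
import sys
from pathlib import Path

import pymc3

test_dir = Path(pymc3.__file__).parent / "tests"
failed = []
for test_file in sorted(test_dir.glob("test_*.py")):
    result = subprocess.run([sys.executable, "-m", "pytest", str(test_file)])
    # Return code 5 means "no tests collected", which we treat as harmless.
    if result.returncode not in (0, 5):
        failed.append(test_file.name)

if failed:
    sys.exit("Failing test modules: " + ", ".join(failed))
```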
Hi! This is the friendly automated conda-forge-linting service.
I was trying to look for recipes to lint for you, but it appears we have a merge conflict. Please try to merge or rebase with the base branch to resolve this conflict.
Please ping the 'conda-forge/core' team (using the @ notation in a comment) if you believe this is a bug.
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
I think we should figure out what adding numba is doing and solve the underlying issue, because it really doesn't make any sense to add it here.
@twiecki I completely agree, I just want to reach a state where the CI tests are passing.
Now it's saying pytest failed, even though there were no errors and only warnings? Any idea what's going on?
I am getting closer, thanks very much @michaelosthege for the suggestion to run the tests individually to avoid the stack overflow under Windows.
Now just Windows is failing with some theano compilation errors:
3.7:
E Exception: ('Compilation failed (return status=1): C:\Users\VSSADM~1\AppData\Local\Temp\ccbQJyYb.s: Assembler messages:\r. C:\Users\VSSADM~1\AppData\Local\Temp\ccbQJyYb.s:262: Error: invalid register for .seh_savexmm\r. ', 'FunctionGraph(Elemwise{ge,no_inplace}(v, <TensorType(float64, (True,))>))')
E Exception: ('Compilation failed (return status=1): C:\Users\VSSADM~1\AppData\Local\Temp\cckOnRXa.s: Assembler messages:\r. C:\Users\VSSADM~1\AppData\Local\Temp\cckOnRXa.s:81: Error: invalid register for .seh_savexmm\r. ', 'FunctionGraph(Elemwise{ge,no_inplace}(v, <TensorType(float64, (True,))>))')
E Exception: ("Compilation failed (return status=1): C:\Users\VSSADM\~1\AppData\Local\Temp\cctE7GCa.o: In function
run':\r. C:/Users/VssAdministrator/AppData/Local/Theano/compiledir_Windows-10-10.0.14393-SP0-Intel64_Family_6_Model_79_Stepping_1_GenuineIntel-3.9.6-64/tmpv8iwac39/mod.cpp:1612: undefined reference to
dgemm'\r. C:\Users\VSSADM\~1\AppData\Local\Temp\cctE7GCa.o:mod.cpp:(.rdata$.refptr.dgemm[.refptr.dgemm]+0x0): undefined reference to `dgemm'\r. collect2.exe: error: ld returned 1 exit status\r. ", 'FunctionGraph(BatchedDot(<TensorType(float64, row)>, <TensorType(float64, row)>))')
Do we need to mess with cxxflags? Or is this a BLAS thing? Any ideas @twiecki or @michaelosthege?
This problem was mentioned here: https://github.com/pymc-devs/pymc3/issues/4749 (it seems to be an m2w64-toolchain issue; can you confirm that it gets installed?), as well as here: https://github.com/Theano/Theano/issues/6693 (solved by a compiler flag).
Perhaps the latter is the safer fix?
In the testing environment, m2w64-toolchain is indeed installed here.
I'm not actually sure how one should set compiler flags from conda, primarily because I don't work with C++ much, so it's quite mysterious to me, and also because it seems very risky and complicated. For example, if we mess with the CXXFLAGS environment variable, what happens if it conflicts with other programs? I'm pretty stuck on a good way forward. Any suggestions?
I think we would set the compiler flag in aesara or pymc3; we already do this here: https://github.com/pymc-devs/pymc3/blob/main/pymc3/__init__.py#L40 Could we test this here?
Curious if @michaelosthege has any ideas.
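For what it's worth, here is a minimal sketch of that approach: appending a flag through Theano's own config at import time instead of exporting CXXFLAGS globally. Both the config attribute name and the specific flag below are assumptions based on the linked issues, not a verified fix.

```python
# Hypothetical sketch, not a verified fix: append a compiler flag via Theano's
# config rather than the global CXXFLAGS environment variable, similar in spirit
# to what pymc3/__init__.py does. The attribute name (gcc__cxxflags, as used by
# Theano-PyMC; older Theano spells it gcc.cxxflags) and the chosen flag are
# assumptions taken from the linked issue threads.
import theano

theano.config.gcc__cxxflags += " -fno-asynchronous-unwind-tables"
```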
I'm a biologist. This is pretty much black magic to me. My knowledge about compiler flags ends at the point where I know that they exist and that they do stuff with how the compiler does things, I guess.
I think we should figure out what adding numba is doing and solve the underlying issue, because it really doesn't make any sense to add it here.
Maybe check out this for comparison? https://github.com/conda-forge/numba-feedstock/blob/master/recipe/meta.yaml
@conda-forge-admin, please rerender
@conda-forge-admin, please rerender
@conda-forge-admin, please rerender
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-webservice. I tried to rerender for you, but it looks like there was nothing to do.
@conda-forge-admin, please rerender
I rebased on the new version.
CI seems to be down. I'll probably have to try again tomorrow.
Restarted the tests (I think).
@conda-forge-admin, please rerender
In Python 3.7 there is a missing DLL when loading netcdf4.
In 3.8 and 3.9 there is an error about the following failed assertion in check_logp().
domains = paramdomains.copy()
domains["value"] = domain
for pt in product(domains, n_samples=n_samples):
pt = Point(pt, model=model)
> assert_almost_equal(
logp(pt),
logp_reference(pt),
decimal=decimal,
err_msg=str(pt),
)
E AssertionError:
E Arrays are not almost equal to 6 decimals
E {'alpha': array(20.), 'beta': array(0.9), 'value': array(100.)}
E Mismatched elements: 1 / 1 (100%)
E Max absolute difference: 3.86856262e+25
E Max relative difference: 4.70326902e-16
E x: array(-8.225263e+40)
E y: array(-8.225263e+40)
pymc3\tests\test_distributions.py:611: AssertionError
It looks like numpy's definition of being equal to 6 decimals means "6 digits after the decimal point" rather than "6 significant digits".
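As an illustration (a made-up example, not taken from the CI logs): assert_almost_equal compares the absolute difference against 1.5 * 10**(-decimal), so two huge values that agree to roughly 16 significant digits can still fail.

```python
# Illustrative example of the failure mode: numpy checks |x - y| < 1.5e-6 for
# decimal=6, not the number of matching significant digits.
import numpy as np
from numpy.testing import assert_almost_equal

x = np.array(-8.225263e40)
y = x * (1 + 5e-16)  # relative difference ~5e-16, absolute difference ~4e25

assert_almost_equal(x, y, decimal=6)  # raises AssertionError
```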
@maresb I merged the mkl-services PR. Should we revisit this?
@twiecki, sure! I just rebased on master, so hopefully some of the latest changes have fixed some of the issues.
I don't have any time to work on this, but please feel free to commit to this branch.
Also please see the commented-out changes. For instance, we may need to pin the semver package. Unfortunately, I've pretty much completely forgotten where I was at.
@conda-forge-admin, please rerender
Windows is segfaulting right away on tests\test_pickling.py, even though it's running as an individual test. I can move that one to the end and see if the other tests make any more progress.
OSX/3.7 stalled out mysteriously. Maybe it was a glitch.
I'll rebase master and try again...
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-webservice. I tried to rerender for you, but it looks like there was nothing to do.
Windows tests are still segfaulting. :( @michaelosthege, do you have any ideas?
@maresb the tests segfault on Windows when you run too much at once, where "too much" is something like less than a quarter of all tests. Looks like the CI here is trying to run everything at once.
@michaelosthege, on precisely this advice from you, I already split the tests so that each test file runs individually. :wink: Unfortunately it's still segfaulting on Windows, so I was hoping that you might have another clever idea.
@maresb oh I see. In one log it shows this:
When I see this on my PC I trash the environment right away and start over. It may work, but this is definitely not normal. Could be unrelated though.
Do you know which test causes the segfault? I couldn't see that from the logs. Also, maybe there's still some notion of a "process session" and the split-up didn't help. The CI on our main repo splits things across jobs that run more separately than the tests here(?)
This is now mostly obsolete, so I'm closing this out.
It could, however, be a useful reference in the future if we want to run pytest in pymc-feedstock.
Checklist
- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)