mathurinm opened 2 years ago
Hi, thanks for the report. Sorry for the problem.
I guess it explains the test fail for https://github.com/benchopt/benchmark_lasso/pull/66
I'll look into this. I did not test with (ana)conda. Thanks for the pointer to the celer issue.
Does `pip install --no-build-isolation spams` work? (as advised by Thomas in his answer)
@samuelstjean any idea how to solve the incompatibility between the `pyproject.toml` and the conda setup? Thanks!
Yes, `pip install --no-build-isolation spams` works (I have to install numpy manually before).
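For reference, the workaround discussed here would look something like the following (a sketch assuming a conda environment whose numpy is already the MKL-linked build that spams should compile against):

```shell
# Install numpy first, so it is present in the environment
# before spams is built against it.
pip install numpy

# --no-build-isolation makes pip build spams against the packages
# already in the current environment, instead of re-downloading
# numpy into a fresh isolated build environment (which is what
# pulls in a differently-linked BLAS/LAPACK).
pip install --no-build-isolation spams
```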
@mathurinm a temporary fix would be to list `numpy` as a requirement for the `spams` solver?
Is it possible to pass arguments to pip through benchOpt solver requirements?
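If listing numpy as a requirement turns out to be enough, a benchopt solver can declare it directly. A minimal sketch, assuming the usual benchopt solver layout (`install_cmd` and `requirements` class attributes, with the `pip:` prefix marking pip-installed requirements; worth double-checking against the benchopt documentation):

```python
from benchopt import BaseSolver, safe_import_context

# Deferred import: only attempted once the requirements are installed.
with safe_import_context() as import_ctx:
    import spams


class Solver(BaseSolver):
    name = 'spams'

    # Install numpy via conda first, then spams through pip, so that
    # spams builds against the numpy/BLAS already in the environment.
    install_cmd = 'conda'
    requirements = ['numpy', 'pip:spams']
```

Whether this avoids the build-isolation problem depends on how benchopt invokes pip, so it may still need `--no-build-isolation` somewhere.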
I'm wondering if the issue is something else; I am also using conda here. It seems the problem is that setup.py looks for whatever BLAS numpy is linked to, here the Intel MKL. Since build isolation re-downloads everything, the build will instead look for whatever the freshly downloaded numpy uses, i.e. the regular blas/lapack in another default folder, and it fails since conda ships MKL instead (and your system presumably did not have those). That's also 'bad', since you'd end up with numpy linked against some libs, and spams linked against something else because of build isolation.
Might be difficult, but one way would be to look for MKL first (or other implementations) and go through a list of candidates, rather than relying on what numpy is linked to. I'm not sure how we want to deal with this case since it's difficult to account for everything, and building without isolation will also lead to the same issue of differently linked libs in some cases. Probably why I ended up hardcoding everything, now that I think about it.
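The candidate-list idea could look roughly like this (a sketch only; the library names and preference order are illustrative, not what spams' setup.py actually does):

```python
import ctypes.util

# Candidate BLAS/LAPACK implementations, in order of preference:
# try MKL first, then OpenBLAS, then the reference BLAS.
CANDIDATES = ["mkl_rt", "openblas", "blas"]


def find_blas(candidates=CANDIDATES):
    """Return (name, library file) for the first candidate the
    system linker can resolve, or None if none is found."""
    for name in candidates:
        found = ctypes.util.find_library(name)
        if found is not None:
            return name, found
    return None


print(find_blas())
```

This only answers "which BLAS is available", not where its headers live, so a real implementation would still need per-platform search paths on top of it.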
But in general, I think the problem stems from assuming that whatever numpy is linked to is also what we want to use, since numpy even gives us the path for it. I'm not sure it would work on a brand new system where the user has no blas/lapack, for example, and grabbing the source with build isolation would likely fail there too, since no headers would be present on the system (similar to here, where the conda folder is hidden in the process). In any case, I do not see an easy solution that just works, unfortunately, except if we end up providing premade builds for everyone, since then the problem goes away by itself, in a way.
@samuelstjean Thanks for your input.
Providing pre-built wheels would then be a solution, I guess? No compilation, hence no linking...
Kind of, since it would make it easy for most people to get it running. People using it on clusters or exotic hardware (like the new M1 Macs, or those ARM Windows computers, I guess) will presumably still need to build it themselves. I'm not sure how to properly do it if you want to support everything here, because computers. Even in my builds I was piggybacking on the numpy paths to find the system libs I had already installed, and just hardcoding the rest when that was not working for (mac?) and Windows. So either we assume that people not using the future builds can figure it out, or we put in the time to do it properly.
@gdurif In a fresh env, even after conda installing mkl:
seems very related to the issue I have with celer: https://stackoverflow.com/questions/71340058/conda-does-not-look-for-libpthread-and-libpthread-nonshared-at-the-right-place-w/71414439#71414439