Closed leofang closed 11 months ago
Hi! This is the friendly automated conda-forge-linting service.
I wanted to let you know that I linted all conda-recipes in your PR (recipe) and found some lint.
Here's what I've got...
For recipe:
@conda-forge-admin, please rerender
@beckermr Is there a way to retrieve conda package artifacts built in this PR for testing?
Yes. Give me a minute.
@conda-forge-admin, please rerender
Is there a way to retrieve conda package artifacts built in this PR for testing?
@oleksandr-pavlyk See the Azure page: https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=811656&view=artifacts&pathAsName=false&type=publishedArtifacts You can move your cursor to the right, and a three-dot icon will show up for you to download the artifacts.
Btw, isn't Intel MPI binary compatible with MPICH by design?
I think so, based on the ABI Initiative, but I've never validated it myself.
I've had enough trouble with that in various spots that it'd be safer to ship separate builds for now, even though they may formally be ABI compatible.
Please report all MPICH ABI issues to MPICH via GitHub. The ABI initiative is important and I have relied on it for a decade. I don't know what you're seeing that's not working but it's a surprise.
Btw, isn't Intel MPI binary compatible with MPICH by design?
I did not manage to figure out how we could take advantage of that in conda-forge. Maybe using features: [mpich_abi], but that's probably not enough.
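For context, the way conda-forge usually exposes multiple MPI providers is through build variants in conda_build_config.yaml rather than features. A minimal sketch (the variant keys and provider names here are illustrative, not taken from this feedstock):

```yaml
# conda_build_config.yaml (sketch; actual keys per feedstock may differ)
mpi:
  - mpich
  - openmpi
  - impi   # hypothetical Intel MPI variant
```

Each entry in the list produces a separate build of the package pinned against that MPI provider, which is the "separate builds" approach discussed above.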
I've had enough trouble with that in various spots
I can confirm that, at least for mpi4py (which uses almost all of the MPI 3.1/4.0 API), the MPICH ABI is working perfectly fine when replacing MPICH with Intel MPI 2021.10. This claim is under CI testing here.
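One way to spot-check such a swap from Python is to ask mpi4py which MPI runtime it actually loaded: MPI.Get_library_version() returns the library's own version string, which names MPICH before the swap and Intel MPI after. A small sketch (this is not the actual mpi4py CI script, just an illustration; it degrades gracefully when mpi4py is absent):

```python
import subprocess
import sys

def mpi_library_version():
    """Report which MPI library mpi4py actually loaded,
    or None if mpi4py is not installed in this interpreter."""
    cmd = [
        sys.executable, "-c",
        "from mpi4py import MPI; print(MPI.Get_library_version())",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip() if result.returncode == 0 else None

if __name__ == "__main__":
    print(mpi_library_version())
```

Running this before and after replacing the mpich package with Intel MPI should show the reported runtime changing while the mpi4py build stays the same.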
PS: Debian/Ubuntu have had a semi-broken Linux MPICH ABI, simply because they use a different name for the MPI shared library than libmpi.so.12, and they do not provide a symlink. That would only affect conda-forge users if they install the MPICH external dummy package because they want to use the distro-provided MPICH.
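The soname point above is the crux of the ABI contract: any compliant MPI (MPICH, Intel MPI, etc.) must ship a library loadable as libmpi.so.12, so a consumer can dlopen that fixed name without caring which vendor provided it. A minimal sketch of that check (the Linux soname is per the MPICH ABI initiative; the function returns None where no MPI runtime is installed):

```python
import ctypes

# Shared-library name fixed by the MPICH ABI initiative on Linux.
SONAME = "libmpi.so.12"

def load_mpi(soname=SONAME):
    """Try to dlopen the ABI-pinned MPI library; return None if absent."""
    try:
        return ctypes.CDLL(soname)
    except OSError:
        return None

if __name__ == "__main__":
    lib = load_mpi()
    print("MPI runtime found" if lib is not None else "no MPI runtime here")
```

A distro that renames the library without providing the libmpi.so.12 symlink breaks exactly this lookup, which is the semi-broken case described above.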
Sorry @jeffhammond! I should have been more specific. The issues I've seen are not ABI-related but are instead system/vendor-specific quirks in how MPI is packaged and interacts with the scheduler on various systems. This makes me hesitant to rely on run-time swapping. However, from the point of view of conda-forge's packages, this indeed may not be an issue.
@oleksandr-pavlyk let me know when you're done with local testing, and I'll revert the artifact change 🙂
@leofang I have downloaded artifacts, feel free to revert the change
@conda-forge-admin, please rerender
CI is green except for the linter (which I assume has been ignored for a long time here?) @conda-forge/intel_repack merge?
Thanks to all! 🙂
Thank you all! :tada:
I think the same issue that we see with Intel MPI is also a problem for Intel OpenMP. Opened a new issue here: #59
See Step 2 & 4 of https://github.com/conda-forge/mpi-feedstock/issues/11#issuecomment-1769101283. This fix is needed to avoid hitting issues like this.
cc: @conda-forge/intel_repack, @conda-forge/mpi, @conda-forge/mpi4py, @dalcinl @ax3l @beckermr
Checklist
- [ ] Reset the build number to 0 (if the version changed)
- [ ] Re-rendered with the latest conda-smithy (use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)