napetrov closed this pull request 1 year ago.
Hi! This is the friendly automated conda-forge-linting service.
I wanted to let you know that I linted all conda-recipes in your PR (recipe) and found some lint.
Here's what I've got...
For recipe:
@conda-forge-admin, please rerender
@conda-forge-admin, please rerender
@napetrov I always found the `impi_rt` name a bit ugly. Should we really follow that pattern here? That is not the convention for other packages in this repack feedstock. Perhaps you should actually output `impi`, `impi-include`, `impi-devel`?
This feedstock is doing an as-is copy, so I think messing with the names here would cause more problems. But I agree the `_rt` naming is a bit strange.
Maybe add an additional `impi` output as a metapackage for the concrete `impi_rt`?
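A minimal sketch of what such an alias output might look like in meta.yaml, assuming the existing `impi_rt` output and the `impi_version`/`impi_buildnum` variables used elsewhere in the recipe (the layout is illustrative, not taken from the actual feedstock):

```yaml
outputs:
  # existing repacked runtime output (unchanged)
  - name: impi_rt
    version: "{{ impi_version }}"
    # ...
  # hypothetical alias metapackage that only pins the concrete runtime
  - name: impi
    version: "{{ impi_version }}"
    build:
      number: {{ impi_buildnum }}
    requirements:
      run:
        - impi_rt {{ impi_version }}.*
```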
I can pass feedback to the Intel MPI team, but renames or additional names should first be added at the origin, in the Intel channel. I would like to keep the repack feedstock as just a repack, without additional functionality.
@isuruf @oleksandr-pavlyk - this one is ready for merge
@dalcinl - thanks for catching!
I know it is off-topic for this PR, but I'm wondering why the `version` and `buildnum` variables do not have a `dal_` prefix.
This would be a thing to clean up and fix (in a separate PR), but basically it's a result of Conway's law. The DAL team needs this feedstock for pushing oneDAL repacks, and the other components just get updated together with oneDAL updates. So, for a set of historical reasons, this is how things evolved.
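Purely as an illustration of what that cleanup could look like (placeholder values, hypothetical names; the recipe today uses the unprefixed `version`/`buildnum` for the oneDAL outputs):

```yaml
# hypothetical cleanup: one prefixed version/buildnum pair per repacked component
{% set dal_version = "2024.0.0" %}    # placeholder value
{% set dal_buildnum = 0 %}
{% set impi_version = "2021.11.0" %}  # placeholder value
{% set impi_buildnum = 0 %}
```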
@napetrov Your test commands do not look right to me, they are too generic. They probably need to be different for Linux and Windows as well. Let me take a look.
Thanks, yes, this is very welcome. There has usually been limited interest in getting things checked after the repack, as most of the validation happens for packages in the Intel repo. But there is certainly a set of things that can be checked to validate the potential impact of repackaging.
I just added the improved tests as suggestions. I have not actually tried them myself as I don't have a Windows box at hand, let's see how it goes.
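For context, a rough sketch of the kind of platform-specific checks meant here (untested; the file locations are assumptions, not verified against the actual packages):

```yaml
test:
  commands:
    # confirm the launcher can actually run a trivial program
    - mpiexec -n 2 hostname                          # [linux]
    - mpiexec -n 2 hostname                          # [win]
    # confirm the runtime library was packaged where consumers expect it
    - test -f $PREFIX/lib/libmpi.so                  # [linux]
    - if not exist %LIBRARY_BIN%\impi.dll exit 1     # [win]
```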
@napetrov Again, off-topic: Do you have direct access to the Intel MPI dev/maintenance team? I've tried to communicate with Intel about issues with the Intel MPI implementation using the official community channel, but the experience was high-latency and thus a bit of a pain. There are some outstanding issues with mpi4py on Linux and Windows, and a few minor details in the conda packages (e.g. a missing `mpifort` compiler wrapper to make things compatible with mpich/openmpi). If you know any of the interested parties, please tell them I would like to get a more direct and straightforward interaction to work on improving the experience of my users and their users.
Done.
I prefer `intel-mpi` instead of `impi`. I had no idea of this abbreviation, and that's what I searched for on anaconda.org when trying to find this package.
Well, "impi" and "IMPI" are abbreviation/prefixes that intel has used forever to refer to its MPI implementation , and it in line with what other vendors have used in the past (e.g. hpmpi). We have "msmpi" in conda-forge as our own precedent of using such abbreviations. "impi" is even used a the dll name on Windows. Any actual use of Intel MPI would be pretty much used to this abbreviation.
@napetrov @isuruf Something here did not go well. The version number of the package in conda-forge is wrong; it does not match the one in the original `intel` channel.
https://anaconda.org/intel/impi-devel/files
https://anaconda.org/conda-forge/impi-devel/files
@napetrov The issue is obvious: the new outputs are missing the `version: {{ impi_version }}` entry. The `number: {{ impi_buildnum }}` entry is also missing.
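Presumably the fix is along these lines for each of the new outputs (a sketch only, using the variable names quoted above):

```yaml
outputs:
  - name: impi-devel
    version: "{{ impi_version }}"
    build:
      number: {{ impi_buildnum }}
    # ... rest of the output unchanged
```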
Checklist
- [ ] Reset the build number to 0 (if the version changed)
- [ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)