Closed acfisher closed 6 years ago
Not sure if this is relevant, but MFEM is also part of:
Is there already an RPM spec file available for MFEM that has been prepared for the OpenHPC repository? If there is a spec file, I could review it.
We'll prepare one after the new year and get back to you.
I took a stab at one:
https://build.openhpc.community/package/view_file/home:crbaird/mfem/mfem.spec?expand=1
Comments welcome.
Wow, thanks for putting that together! The only change I would suggest is adding the dependency on superlu-dist, which I believe is part of OpenHPC. It's optional, but it will unlock the SuperLU solvers inside MFEM.
@crbaird: how do BLAS and LAPACK get into the build for MFEM here? Do OHPC builds assume they're installed as part of the base distro? I'm just curious for my own edification -- we explicitly depend on these in Spack.
@tgamblin: At least hypre and petsc depend on BLAS and pull it in.
@acfisher: should the MFEM spec also depend on BLAS/LAPACK? Or is that not necessary since SuiteSparse isn't in here? I know in Spack it depends on BLAS and LAPACK optionally and when SuiteSparse is used.
From a spec-file point of view it is unnecessary, and it is not how we do it in our other spec files. If some other package already pulls something in (openblas in this case), we do not list it here again.
Ok. TIL. It doesn't make a lot of sense to me for packages not to list all their direct dependencies (what happens if your indirect dependencies change but you actually needed them?). But this is up to you guys. Is that an OHPC convention or is that also followed by the major distros?
Not sure I understand your question. But listing all dependencies does not make much sense; it is unnecessary duplication of information if our tools (RPM) handle it. If a lower-level package gets a new dependency and we do not have to change all the packages that depend on it, that seems better than having to update every package to list it.
If you have an example, it might be easier to understand the problem you are describing.
Ah, I guess I get it. You mean if MFEM directly depends on openblas but it is only pulled in implicitly (via hypre, petsc). If hypre and petsc later stop requiring openblas, it will be missing. Good point.
This probably depends on whether the dependency chain is like this:
or rather:
and
Yes, I agree that listing all transitive dependencies doesn't make sense. But if in this case MFEM actually uses BLAS/LAPACK directly, I would expect it to be listed here.
Suppose:
I'm saying A should list B and C as dependencies, even though B also lists C as a dependency, but A should not list D as a dependency. I think you are saying that A should only list B.
But if B then stops depending on C at a later date, A will break. That means your dependencies are declared wrong.
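To make the spec-file side of this concrete, here is a hypothetical fragment showing what listing MFEM's direct dependencies explicitly (rather than relying on hypre/petsc to pull them in) could look like. The package names and macros below are illustrative guesses at OpenHPC conventions, not copied from the actual spec:

```
# Hypothetical fragment: declare MFEM's direct build dependencies
# explicitly instead of inheriting them transitively. Package names
# and macro spellings are illustrative and may differ in OpenHPC.
BuildRequires: hypre-%{compiler_family}-%{mpi_family}%{PROJ_DELIM}
BuildRequires: petsc-%{compiler_family}-%{mpi_family}%{PROJ_DELIM}
BuildRequires: openblas-%{compiler_family}%{PROJ_DELIM}
```

The trade-off discussed above is exactly this: the explicit form survives hypre or petsc dropping openblas, at the cost of repeating information RPM would otherwise resolve for you.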
I'm asking based on the Spack package (see link above), the making of which I was loosely involved in. @acfisher would know if there really is a direct BLAS dependency or whether the conditional blas/lapack dependencies in the Spack package are there to work around something else. I vaguely recall extra transitive dependencies being added to the Spack builds to support static builds, where you unfortunately need to know lib names for your whole dependency DAG.
@adrianreber: Yep I think we're on the same page! I bet @acfisher is asleep but I'll keep mentioning him so he gets lots of emails in the morning 😄 .
Indeed, I got lots of emails this morning :). Todd is correct: MFEM requires LAPACK when it is compiled with HYPRE support and HYPRE is using LAPACK. We do have some LAPACK calls when those options are enabled. I'm less sure about BLAS; a search didn't turn up any BLAS calls, but it may be included to work around another issue.
We also have a few optional calls to LAPACK in densemat.cpp when MFEM_USE_LAPACK = YES; see
https://github.com/mfem/mfem/search?l=C%2B%2B&q=MFEM_USE_LAPACK&type=&utf8=%E2%9C%93
These are not critical though, and for most users we recommend that they use our internal implementations.
There are no direct BLAS calls in MFEM.
MFEM requires LAPACK when it is compiled with HYPRE support and HYPRE is using LAPACK.
Does the MFEM build detect when Hypre is compiled with LAPACK? If it does, then the RPM spec should probably not depend on LAPACK directly, as it depends entirely on Hypre's configuration. This is kind of a special case of what @adrianreber and I were talking about above.
From @tzanio's recommendation it sounds like OHPC should compile with MFEM_USE_LAPACK = NO.
Just an observation: based on both comments, it sounds like MFEM can use LAPACK when MFEM_USE_LAPACK = NO, if MFEM is compiled with Hypre and Hypre uses LAPACK 🤔 . Not the clearest semantics 😄 .
Thanks, @acfisher. We had an issue keeping us on a crusty old version of superlu_dist that has now been resolved.
Sorry, I misread a comment in the MFEM build system. We do not require LAPACK unless we turn on MFEM_USE_LAPACK.
Looking at the spec file, it probably needs a corrected URL line:
Source0: http://mfem.github.io/releases/%{pname}-%{version}.tgz
and it does not build for me. Lots of errors like:
linalg/petsc.cpp: In member function 'void mfem::PetscBDDCSolver::BDDCSolverConstructor(const mfem::PetscBDDCSolverParams&)':
linalg/petsc.cpp:2430:20: error: 'PCBDDCSetDiscreteGradient' was not declared in this scope
ierr = PCBDDCSetDiscreteGradient(pc,*G,p,0,PETSC_TRUE,conforming);
Hi @adrianreber,
Our tarballs can indeed be found at URLs of that form. When possible, we prefer the goo.gl shortlinks from http://mfem.org/download/, as these give us at least some level of download analytics. See for example http://goo.gl/Vrpsns+.
The PETSc support is relatively recent and requires PETSc version 3.8; for example, PCBDDCSetDiscreteGradient is defined in petsc-3.8.3/src/ksp/pc/impls/bddc/bddc.c. Which version do you use?
Tzanio
Thanks for the petsc explanation. I was building against OpenHPC 1.3.3 which includes an older version. Good to know. I will try it with a newer version of petsc.
The goo.gl links unfortunately do not work in RPM spec files, as the tooling expects the tarball name after the last /. But I guess the number of downloads from the few people rebuilding the RPM from the OpenHPC spec is minimal.
The current development branch (1.3.4) has dependency problems. I will try it once that is fixed.
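As an aside on the filename issue: some RPM tooling accepts a URL fragment after "#/" to supply the expected tarball name while keeping the original link. Whether the OpenHPC/OBS build tooling honors this convention is an assumption worth checking before relying on it:

```
# Hypothetical workaround: keep the shortlink but append the tarball
# name after a "#/" fragment. Only valid if the build tooling
# supports this convention.
Source0: http://goo.gl/Vrpsns#/%{pname}-%{version}.tgz
```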
Thank you for the submission and extra details on follow up questions. The TSC has recommended acceptance of MFEM. The starting target for inclusion is to land on the OpenHPC 1.3.4 branch.
Thanks @koomie, @adrianreber, @crbaird and @acfisher!
Great news. Thanks!
Software Name
MFEM
Public URL
mfem.org
Technical Overview
MFEM is a lightweight, scalable C++ library for finite element discretizations. Features include arbitrary-order curved elements in 1D, 2D, and 3D for a variety of FE spaces, including H1, H(curl), H(div), and L2. Galerkin, discontinuous Galerkin (DG), and discontinuous Petrov-Galerkin (DPG) approaches are supported. Parallel meshes with adaptive mesh refinement are also supported.
Latest stable version number
v3.3.2
Open-source license type
LGPL2.1
Relationship to component?
If other, please describe:
Build system
If other, please describe: Also in spack.
Does the current build system support staged path installations? For example:
make install DESTDIR=/tmp/foo
(or equivalent)
Does component run in user space or are administrative credentials required?
Does component require post-installation configuration?
If yes, please describe briefly:
If component is selected, are you willing and able to collaborate with OpenHPC maintainers during the integration process?
Does the component include test collateral (e.g. regression/verification tests) in the publicly shipped source?
If yes, please briefly describe the intent and location of the tests. Basic build and regression testing is included and accessed through the make system (make check/make test).
Does the component have additional software dependencies (beyond compilers/MPI) that are not part of standard Linux distributions?
If yes, please list the dependencies and associated licenses. External libraries are not required, but MFEM supports optional integration with PETSc, HYPRE, SuperLU, and METIS which are all included in the current version of OpenHPC.
Does the component include online or installable documentation?
If available online, please provide URL. mfem.org (Documentation drop-down)
[Optional]: Would you like to receive additional review feedback by email?
- [ ] yes - [x] no