openhpc / submissions

OpenHPC Component Submissions Project

MFEM #29

Closed acfisher closed 6 years ago

acfisher commented 6 years ago

Software Name

MFEM


Public URL

mfem.org


Technical Overview

MFEM is a lightweight, scalable C++ library for finite element discretizations. Features include arbitrary-order curved elements in 1D, 2D, and 3D for a variety of FE spaces, including H1, H(Curl), H(Div), and L2. Galerkin, Discontinuous Galerkin (DG), and Discontinuous Petrov-Galerkin (DPG) approaches are supported. Parallel meshes with adaptive mesh refinement are also supported.


Latest stable version number

v3.3.2


Open-source license type

LGPL2.1


Relationship to component?

If other, please describe:


Build system

If other, please describe: Also in spack.

Does the current build system support staged path installations? For example: make install DESTDIR=/tmp/foo (or equivalent)
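For context, DESTDIR staging redirects the whole install tree under a scratch prefix without changing the configured install paths. A minimal toy demonstration of the convention (the Makefile below is illustrative only, not MFEM's actual makefile):

```shell
mkdir -p /tmp/destdir-demo && cd /tmp/destdir-demo
# Toy Makefile with a conventional DESTDIR-aware install target
# (illustrative only; MFEM's real build system differs).
printf 'PREFIX ?= /usr/local\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/lib\n\ttouch $(DESTDIR)$(PREFIX)/lib/libmfem.a\n' > Makefile
# Stage under /tmp/foo instead of writing to /usr/local:
make install DESTDIR=/tmp/foo
ls /tmp/foo/usr/local/lib/libmfem.a
```

Packaging tools (including rpmbuild's %install stage) rely on exactly this pattern to populate the buildroot.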


Does component run in user space or are administrative credentials required?


Does component require post-installation configuration?

If yes, please describe briefly:


If component is selected, are you willing and able to collaborate with OpenHPC maintainers during the integration process?


Does the component include test collateral (e.g. regression/verification tests) in the publicly shipped source?

If yes, please briefly describe the intent and location of the tests. Basic build and regression testing is included and accessed through the make system (make check/make test).


Does the component have additional software dependencies (beyond compilers/MPI) that are not part of standard Linux distributions?

If yes, please list the dependencies and associated licenses. External libraries are not required, but MFEM supports optional integration with PETSc, HYPRE, SuperLU, and METIS which are all included in the current version of OpenHPC.


Does the component include online or installable documentation?

If available online, please provide URL. mfem.org (Documentation drop-down)


[Optional]: Would you like to receive additional review feedback by email?

- [ ] yes
- [x] no
tzanio commented 6 years ago

Not sure if this is relevant, but MFEM is also part of:

There are Spack and Homebrew packages for it.

adrianreber commented 6 years ago

Is there already a RPM spec file available for MFEM which has been prepared for the OpenHPC repository? If there is a spec file I could review it.

acfisher commented 6 years ago

We'll prepare one after the new year and get back to you.

crbaird commented 6 years ago

I took a stab at one:

https://build.openhpc.community/package/view_file/home:crbaird/mfem/mfem.spec?expand=1

Comments welcome.

acfisher commented 6 years ago

Wow, thanks for putting that together! The only change I would suggest is adding the dependence on superlu-dist which I believe is a part of OpenHPC. It's optional, but it will unlock the superlu solvers inside of MFEM.
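Such an addition might look like the following fragment (hypothetical: the macro names follow common OpenHPC spec conventions for compiler/MPI-family packages and may not match the linked mfem.spec exactly):

```spec
# Hypothetical fragment; macro names follow OpenHPC conventions and
# may differ from the actual mfem.spec linked above.
BuildRequires: superlu_dist-%{compiler_family}-%{mpi_family}%{PROJ_DELIM}
Requires:      superlu_dist-%{compiler_family}-%{mpi_family}%{PROJ_DELIM}
```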

tgamblin commented 6 years ago

@crbaird: how do BLAS and LAPACK get into the build for MFEM here? Do OHPC builds assume they're installed as part of the base distro? I'm just curious for my own edification -- we explicitly depend on these in Spack.

adrianreber commented 6 years ago

@tgamblin: At least hypre and petsc depend on BLAS and pull it in.

tgamblin commented 6 years ago

@acfisher: should the MFEM spec also depend on BLAS/LAPACK? Or is that not necessary since SuiteSparse isn't in here? I know in Spack it depends on BLAS and LAPACK optionally and when SuiteSparse is used.

adrianreber commented 6 years ago

From a spec file point of view it is unnecessary, and not how we do it in our other spec files. If some other package already pulls a dependency in (openblas in this case), we do not list it here again.

tgamblin commented 6 years ago

Ok. TIL. It doesn't make a lot of sense to me for packages not to list all their direct dependencies (what happens if your indirect dependencies change but you actually needed them?). But this is up to you guys. Is that an OHPC convention or is that also followed by the major distros?

adrianreber commented 6 years ago

Not sure I understand your question, but listing all dependencies does not make a lot of sense and is unnecessary duplication of information if our tools (RPM) handle it. If a lower-level package gains a new dependency and we do not have to change all the packages depending on it, I see that as a better solution than having to update every package to list it.

If you have an example it would maybe be easier to understand what problem you are describing.

adrianreber commented 6 years ago

Ah, I guess I get it. You mean if MFEM directly depends on openblas but it is pulled in via an implicit rule (hypre, petsc): what if hypre and petsc no longer require openblas? It will be missing. Good point.

This probably depends on whether the dependency chain is like this:

or rather:

and

tgamblin commented 6 years ago

Yes, I agree that listing all transitive dependencies doesn't make sense. But if in this case MFEM actually uses BLAS/LAPACK directly, I would expect it to be listed here.

Suppose:

  1. A depends on B and calls functions in B
  2. B depends on C and calls functions in C
  3. C depends on D and calls functions in D
  4. A also depends on C and calls functions in C

I'm saying A should list B and C as dependencies, even though B also lists C as a dependency, but A should not list D as a dependency. I think you are saying that A should only list B.

But if B then stops depending on C at a later date, A will break. That means your dependencies are declared wrong.
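The scenario above can be sketched in a few lines (a toy model of declared-dependency resolution; package names mirror the A/B/C/D example, nothing here is actual RPM or Spack machinery):

```python
# Toy model of the dependency discussion: compute the install set from
# declared dependencies, then see what happens when B drops C.
def closure(pkg, deps):
    """Return every package installed when `pkg` is requested."""
    seen = set()
    stack = [pkg]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(deps.get(p, []))
    return seen

# A calls into both B and C, but only declares B:
deps = {"A": ["B"], "B": ["C"], "C": ["D"]}
print(sorted(closure("A", deps)))  # ['A', 'B', 'C', 'D'] -- C arrives transitively

# Later, B drops its C dependency -- A silently loses C:
deps["B"] = []
print("C" in closure("A", deps))   # False: A is now broken

# Declaring A's direct dependencies protects it:
deps["A"] = ["B", "C"]
print("C" in closure("A", deps))   # True
```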

I'm asking based on the Spack package (see link above), which I was loosely involved in creating. @acfisher would know if there really is a direct BLAS dependency or whether the conditional blas/lapack dependencies in the Spack package are there to work around something else. I vaguely recall extra transitive dependencies being added to the Spack builds to support static builds, where you unfortunately need to know lib names for your whole dependency DAG.

tgamblin commented 6 years ago

@adrianreber: Yep I think we're on the same page! I bet @acfisher is asleep but I'll keep mentioning him so he gets lots of emails in the morning 😄 .

acfisher commented 6 years ago

Indeed, I got lots of emails this morning :). Todd is correct: MFEM requires LAPACK when it is compiled with HYPRE support and HYPRE is using LAPACK. We do have some LAPACK calls when those options are enabled. I'm less sure about BLAS. A search didn't turn up any BLAS calls, but it may be included to work around another issue.

tzanio commented 6 years ago

We also have a few optional calls to LAPACK in densemat.cpp when MFEM_USE_LAPACK = YES, see

https://github.com/mfem/mfem/search?l=C%2B%2B&q=MFEM_USE_LAPACK&type=&utf8=%E2%9C%93

These are not critical though, and for most users we recommend that they use our internal implementations.

There are no direct BLAS calls in MFEM.

tgamblin commented 6 years ago

MFEM requires LAPACK when it is compiled with HYPRE support and HYPRE is using LAPACK.

Does the MFEM build detect when Hypre is compiled with LAPACK? If it does, then the RPM spec should probably not depend on LAPACK directly, as that depends entirely on Hypre's configuration. Which is kind of a special case of what @adrianreber and I were talking about above.

From @tzanio's recommendation it sounds like OHPC should compile with MFEM_USE_LAPACK = NO.

Just an observation: based on both comments, it sounds like MFEM can use LAPACK when MFEM_USE_LAPACK = NO, if MFEM is compiled with Hypre and Hypre uses LAPACK 🤔 . Not the clearest semantics 😄 .
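For reference, the toggle is a configure-time variable in MFEM's GNU-make build, so an OpenHPC-style parallel configure with LAPACK disabled might look like the following sketch (the MFEM_USE_* variables follow MFEM's config/defaults.mk conventions; the *_DIR paths are placeholders, not actual OpenHPC install locations):

```shell
# Sketch only; directory paths are placeholders.
make config MFEM_USE_MPI=YES MFEM_USE_LAPACK=NO \
            HYPRE_DIR=$HYPRE_DIR METIS_DIR=$METIS_DIR
make -j4
```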

crbaird commented 6 years ago

Thanks, @acfisher. We had an issue keeping us on a crusty old version of superlu_dist that has now been resolved.

acfisher commented 6 years ago

Sorry, I misread a comment in the MFEM build system. We do not require LAPACK unless we turn on MFEM_USE_LAPACK.

adrianreber commented 6 years ago

Looking at the spec file it probably needs a correct URL line:

Source0: http://mfem.github.io/releases/%{pname}-%{version}.tgz

and it does not build for me. Lots of errors like:

linalg/petsc.cpp: In member function 'void mfem::PetscBDDCSolver::BDDCSolverConstructor(const mfem::PetscBDDCSolverParams&)':
linalg/petsc.cpp:2430:20: error: 'PCBDDCSetDiscreteGradient' was not declared in this scope
             ierr = PCBDDCSetDiscreteGradient(pc,*G,p,0,PETSC_TRUE,conforming);
tzanio commented 6 years ago

Hi @adrianreber,

Our tarballs can indeed be found at URLs of that form. When possible, we prefer to use the goo.gl shortlinks from http://mfem.org/download/, as these give us at least some level of download analytics. See for example http://goo.gl/Vrpsns+.

The PETSc support is relatively recent and requires PETSc version 3.8, for example PCBDDCSetDiscreteGradient is defined in petsc-3.8.3/src/ksp/pc/impls/bddc/bddc.c. Which version do you use?

Tzanio

adrianreber commented 6 years ago

Thanks for the petsc explanation. I was building against OpenHPC 1.3.3 which includes an older version. Good to know. I will try it with a newer version of petsc.

The goo.gl links unfortunately do not work in RPM spec files, as RPM expects the tarball name after the last /. But I guess the download traffic from the few people rebuilding the RPM from the OpenHPC spec is minimal.

adrianreber commented 6 years ago

The current OpenHPC development branch (1.3.4) has dependency problems. Will try it once that is fixed.

koomie commented 6 years ago

Thank you for the submission and extra details on follow up questions. The TSC has recommended acceptance of MFEM. The starting target for inclusion is to land on the OpenHPC 1.3.4 branch.

tzanio commented 6 years ago

Thanks @koomie, @adrianreber, @crbaird and @acfisher!

acfisher commented 6 years ago

Great news. Thanks!