geo-fluid-dynamics / phaseflow-fenics

Phaseflow simulates the convection-coupled melting and solidification of phase-change materials.
MIT License

Consider using fenics-hpc instead of fenics #133

Open agzimmerman opened 6 years ago

agzimmerman commented 6 years ago

Who is actually scaling fenics?

Do the fenics and fenics-hpc developers coordinate/overlap? Are changes ever pushed from fenics-hpc to fenics?

Why does the HP-FEM MOOC use fenics-hpc and dolfin 1.6 instead of an up-to-date fenics?

The fenicsproject welcome page says

Each component of the FEniCS platform has been fundamentally designed for parallel processing. Executing a FEniCS script in parallel is as simple as calling mpirun -np 64 python script.py. This framework allows for rapid prototyping of finite element formulations and solvers on laptops and workstations, and the same code may then be deployed on large high-performance computers.

The figure shows the von Mises stresses computed from a nonlinear thermomechanical FEniCS simulation of a turbocharger. The finite element system of linear equations comprises more than 3.3 × 10⁹ degrees of freedom. The solver was initially developed on a desktop computer for a small scale problem, and the same code was then deployed on a supercomputer using over 24000 parallel processes.

This instantly convinced me when I first looked into fenics, but now I'm starting to worry. I've already run into a major component which explicitly throws a "NotImplemented" error when running with MPI, and I've found reports that the assembly routine does not scale well with OpenMP.
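To make the welcome page's claim concrete, here is a minimal sketch of such a script, assuming legacy FEniCS (DOLFIN) is installed; it is written as if serial, and the claim is that it can be launched unchanged with, e.g., `mpirun -np 64 python poisson.py` (the file name is just an example):

```python
# Minimal sketch: a serial-looking Poisson solve that legacy FEniCS (DOLFIN)
# runs distributed when launched under mpirun, with mesh partitioning and a
# parallel PETSc solve handled behind the scenes.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, DirichletBC, Constant, inner, grad, dx, solve)

mesh = UnitSquareMesh(64, 64)          # DOLFIN partitions the mesh across MPI ranks
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
a = inner(grad(u), grad(v)) * dx
L = Constant(1.0) * v * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

uh = Function(V)
solve(a == L, uh, bc)                  # distributed assembly and linear solve
```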

agzimmerman commented 6 years ago

There is a FEniCS-HPC paper.

agzimmerman commented 6 years ago

The paper is quite short. It has this to say about the HPC aspects of FEniCS-HPC:

Besides the code generation part, a key component of FEniCS is the Object-Oriented finite element library DOLFIN [17], from which we have developed a high performance branch DOLFIN-HPC [12], optimized for distributed memory architectures. DOLFIN handles mesh representation and assembly of weak forms but relies on external libraries for solving the linear systems. Our high performance branch extends DOLFIN with a fully distributed mesh, parallel adaptive mesh refinement, and predictive dynamic load balancing capabilities [14].

The parallelization strategy within DOLFIN-HPC is based on an element-wise distribution, given by the dual graph of the underlying computational mesh. To minimize data dependencies during finite element assembly, whole elements are assigned to each processing element (PE), and the overlap between PEs is represented as ghosted entities. Thus, assembling the stiffness matrix in every time-step can be performed in a straightforward way. Each PE computes the local stiffness matrices of its elements and adds them to the global matrix. For the linear solvers, a row-wise distribution of matrices is assumed, which directly maps to our element-wise distribution.

DOLFIN-HPC is written in C++, and is parallelized using either flat MPI or hybrid MPI + PGAS [13]. The framework has proven to scale well on a wide range of architectures, even for very latency-sensitive kernels, with the addition of the hybrid parallelization.

There are no direct performance comparisons to FEniCS itself.
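For comparison, plain FEniCS/DOLFIN with its PETSc backend also partitions the mesh element-wise for assembly and distributes the assembled operator row-wise. A minimal sketch for inspecting the ownership ranges, assuming legacy DOLFIN:

```python
# Sketch (plain legacy FEniCS/DOLFIN, not DOLFIN-HPC): each MPI rank assembles
# the element matrices of its mesh partition, and the global operator is
# distributed row-wise, matching the dof ownership of each rank.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    inner, grad, dx, assemble, MPI)

mesh = UnitSquareMesh(32, 32)                # partitioned element-wise across MPI ranks
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
A = assemble(inner(grad(u), grad(v)) * dx)   # local element contributions added into the global matrix

# Report which contiguous block of global dofs (matrix rows) this rank owns
print("rank", MPI.rank(mesh.mpi_comm()), "owns dofs", V.dofmap().ownership_range())
```

Run with, e.g., `mpirun -np 4 python script.py`; each rank should report a disjoint, contiguous range.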

agzimmerman commented 6 years ago

Perhaps their reference [12] will help:

@phdthesis{jansson2013high,
  title  = {High performance adaptive finite element methods: with applications in aerodynamics},
  author = {Jansson, Niclas},
  year   = {2013},
  school = {KTH Royal Institute of Technology}
}

Here's the full text.

agzimmerman commented 6 years ago

Just from reading the abstract, it appears they might have something like a bisection tree. "We present efficient data structures and data decomposition methods for distributed unstructured tetrahedral meshes. Our work also concerns an efficient parallelization of local mesh refinement methods such as recursive longest edge bisection, and the development of an a priori predictive dynamic load balancing method, based on a weighted dual graph."
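As an illustration only (not the thesis's implementation), the basic step of recursive longest-edge bisection splits a simplex across its longest edge. In 2D that step looks like:

```python
# Illustrative sketch only: one step of longest-edge bisection for a single
# 2D triangle, the basic operation behind the recursive refinement method
# named in the abstract.
import numpy as np

def bisect_longest_edge(triangle):
    """Split a triangle (3x2 array of vertex coordinates) into two children
    by inserting the midpoint of its longest edge."""
    edges = [(0, 1), (1, 2), (2, 0)]
    lengths = [np.linalg.norm(triangle[a] - triangle[b]) for a, b in edges]
    a, b = edges[int(np.argmax(lengths))]   # indices of the longest edge
    c = 3 - a - b                           # index of the opposite vertex
    midpoint = 0.5 * (triangle[a] + triangle[b])
    return (np.array([triangle[a], midpoint, triangle[c]]),
            np.array([midpoint, triangle[b], triangle[c]]))

left, right = bisect_longest_edge(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
```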

agzimmerman commented 6 years ago

The thesis also isn't clarifying the issue for me. I think my best course of action will be to ask the teachers of the FEniCS MOOC to explain why they're using FEniCS-HPC instead of FEniCS for part two of their course. I'll wait until a few weeks into the course, after I've completed some modules.

agzimmerman commented 6 years ago

I did not continue with the FEniCS MOOC, and have yet to investigate this question further.

Now that FEniCS is running on JURECA, we should see how well it performs first-hand. Still, in parallel, I would like to discuss FEniCS vs. FEniCS-HPC with the developers.

agzimmerman commented 6 years ago

I e-mailed the FEniCS-HPC developers at dev@fenics-hpc.org, per the direction from their website.

Subject: Swapping FEniCS for FEniCS-HPC

Message:

FEniCS-HPC team:

I am interested in utilizing your library.

First, some background:

I have been working with FEniCS for nearly a year now. My project, Phaseflow, is at https://github.com/geo-fluid-dynamics/phaseflow-fenics . I've kept a vague eye on FEniCS-HPC as I progress, since I rely heavily on fenics.AdaptiveNonlinearVariationalSolver with goal-oriented AMR and would like to eventually have a performant and scalable implementation.

Depending on how much progress I make this year, I may submit a compute time proposal for the JURECA supercomputing cluster in nearby Juelich, where I already have the fenics version of my project running (though this isn't very useful, since the adaptive solver doesn't work in parallel).

Now, I have some questions, if you would be so kind:

  1. To what extent can an existing project which uses the latest stable version of fenics expect to easily swap this for fenics-hpc? Is this part of the fenics-hpc design goals? In other words, are there major differences in the APIs?

  2. I have seen that the website and the MOOC tend to focus on turbulent compressible aerodynamics. To what extent is fenics-hpc still a general-purpose FEM library?

  3. Coincidentally, I see that there is a FEniCS-HPC paper in the context of the JARA initiative. Are any of that paper's authors on this mailing list? https://link.springer.com/chapter/10.1007/978-3-319-53862-4_6

I'll be at the FEniCS conference next week, and would be happy to talk to anyone there who works on HPC.

Thanks,

Alex

Alexander G. Zimmerman, M.Sc.
Doctoral Candidate, AICES, RWTH Aachen University
LinkedIn, ResearchGate, GitHub
Mobile: +49 176 68275339
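For reference, the goal-oriented adaptive solve mentioned in the e-mail follows roughly this pattern in legacy FEniCS (a hedged sketch with a toy nonlinear Poisson problem standing in for Phaseflow's actual formulation); it is this adaptive solver that, as noted above, does not work in parallel:

```python
# Hedged sketch of goal-oriented AMR with legacy FEniCS (DOLFIN), using a toy
# nonlinear Poisson problem. Not Phaseflow's formulation.
from dolfin import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                    DirichletBC, Constant, inner, grad, dx, derivative,
                    NonlinearVariationalProblem,
                    AdaptiveNonlinearVariationalSolver)

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "P", 1)
u = Function(V)
v = TestFunction(V)

F = inner((1 + u**2) * grad(u), grad(v)) * dx - Constant(1.0) * v * dx  # nonlinear residual
bc = DirichletBC(V, Constant(0.0), "on_boundary")
J = derivative(F, u)

problem = NonlinearVariationalProblem(F, u, [bc], J)
M = u * dx                                        # goal functional driving the refinement
solver = AdaptiveNonlinearVariationalSolver(problem, M)
solver.solve(1.0e-6)                              # refine until the goal-error tolerance is met
```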

agzimmerman commented 6 years ago

The FEniCS-HPC team responded. They confirmed that FEniCS-HPC is still general purpose. However, they have stopped supporting the Python interface: "The two branches diverged around DOLFIN 0.8.x, so the DOLFIN interface is an older version. However, the UFL notation is the same, and this is what we're trying to exploit as much as possible, and you could too. The Python interface has also been disabled in FEniCS-HPC, since it was not possible to use it on supercomputers for many years. Our workflow is typically to write a simple Python prototype in Python-FEniCS, and then use the UFL with a simple C++ DOLFIN-HPC wrapper in FEniCS-HPC."
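For context, the shared-UFL workflow they describe typically revolves around a standalone .ufl form file (Python-syntax UFL) that a form compiler such as FFC can turn into C++ code for a DOLFIN wrapper, e.g. with `ffc -l dolfin Poisson.ufl`. A minimal illustrative form file (a Poisson form, not Phaseflow's; the exact DOLFIN-HPC toolchain may differ):

```python
# Illustrative .ufl form file (Python-syntax UFL). The form compiler supplies
# the UFL namespace, so no imports are written here by convention.
element = FiniteElement("Lagrange", triangle, 1)

u = TrialFunction(element)
v = TestFunction(element)
f = Coefficient(element)

a = inner(grad(u), grad(v)) * dx   # bilinear form
L = f * v * dx                     # linear form
```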

massimiliano-leoni commented 5 years ago

Hi! Am I right that we met at the FEniCS conference and discussed this? Was it any help in clearing this matter?

agzimmerman commented 5 years ago

@massimiliano-leoni , yes I remember meeting you at the conference :) I've corresponded with Johan a couple of times since then. I might participate in the testing effort for MSO4SC, which would be my first time using FEniCS-HPC.

Thanks for pointing out that the "question" part of this is resolved. Now it's just on my long list of things to try when there is time, or to have someone else try.