It4innovations / espreso

release repository for ESPRESO

Question about Very large scale mesh import #4

Open altya opened 4 years ago

altya commented 4 years ago

Dear espreso developer,

Our team has run some large-scale efficiency tests with ESPRESO, but so far our tests have been limited to meshes generated by the internal generators.

I noticed that the external mesh import approach uses a single full grid file instead of a list of partitioned mesh files. When dealing with a very large mesh (in our case, on the order of 10 billion elements), can the memory of a single machine handle reading such a large grid file? And does it take a lot of time to distribute the mesh to the different nodes?

best regards, Benjamin

mec059 commented 4 years ago

Dear Benjamin,

ESPRESO contains a parallel loader based on the algorithm described here: https://ieeexplore.ieee.org/document/8820782. The paper describes the parallel loading of Ansys CDB database files. ESPRESO currently also supports the VTK Legacy, XDMF, and EnSight formats.

In our approach, we read a sequential mesh database and compute a decomposition with ParMETIS on the fly, instead of generating separate files for each MPI process before the computation. Hence, you can use the same file produced by a mesh generator with an arbitrary number of MPI processes. Since each MPI process loads only a part of the file and the mesh is never gathered onto a single process, even very large meshes can be loaded quickly (e.g. a mesh with 800 million nodes and 500 million elements in 20 s using 3000 MPI processes).

Best regards, Ondrej