cb-geo / mpm

CB-Geo High-Performance Material Point Method
https://www.cb-geo.com/research/mpm

[Solver] Two-Phase One-Point Explicit #680

Open tianchiTJ opened 3 years ago

tianchiTJ commented 3 years ago

Describe the PR This PR implements the two-phase one-layer explicit solver for fully saturated soil simulation. Each particle carries both a solid phase and a liquid phase, defined by the corresponding volume fractions (n_s and n_w). The theory of porous media is adopted to set up the governing equations (momentum balance and mass balance). The pore pressure is updated from the strain of the solid skeleton, and a weakly compressible fluid is assumed in solving the fluid and mixture balance equations. All implementations work coherently with the current MPI and OpenMP parallelization features.
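
For orientation, a minimal sketch of what such a weakly compressible pore-pressure update can look like; the function name, signature, and exact constitutive form are illustrative assumptions rather than this PR's implementation:

    #include <iostream>

    // A minimal sketch of a weakly compressible pore-pressure update driven
    // by the volumetric strain increment of the solid skeleton. Names and
    // the exact constitutive form are illustrative assumptions.
    double update_pore_pressure(double pore_pressure, double dvolumetric_strain,
                                double porosity, double bulk_modulus_fluid) {
      // Weak compressibility: dp = -(K_w / n) * d(eps_v), so compression
      // (a negative volumetric strain increment) raises the pore pressure.
      return pore_pressure - (bulk_modulus_fluid / porosity) * dvolumetric_strain;
    }

    int main() {
      double p = 0.;            // initial pore pressure (Pa)
      const double kw = 2.2e9;  // bulk modulus of water (Pa)
      const double n = 0.3;     // porosity
      p = update_pore_pressure(p, -1.0e-5, n, kw);    // small compressive step
      std::cout << "pore pressure: " << p << " Pa\n"; // ~7.33e4 Pa
      return 0;
    }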

New implementation

TwoPhase Particle

  1. New class ParticleTwoPhase derived from Particle, which includes all necessary functions for two-phase computation and new two-phase variables (e.g. pore pressure, porosity); see the sketch after this list.
  2. New hdf5 structure for ParticleTwoPhase (also includes refactoring of base hdf5 class)
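
A rough structural sketch of this derivation (the stub base and the liquid-mass member are assumptions; only pore pressure and porosity are named above):

    // Stub base class standing in for the existing Particle<Tdim>, so this
    // sketch is self-contained; the real base lives in the repo.
    template <unsigned Tdim>
    class Particle {
     public:
      virtual ~Particle() = default;
    };

    // Rough structural sketch of the derivation described in point 1 above;
    // the member list is illustrative except pore pressure and porosity,
    // which the PR description names explicitly.
    template <unsigned Tdim>
    class ParticleTwoPhase : public Particle<Tdim> {
     protected:
      double pore_pressure_{0.};  // pore liquid pressure
      double porosity_{0.};       // n = V_voids / V_total
      double liquid_mass_{0.};    // assumed: liquid mass carried by the particle
    };

    int main() {
      ParticleTwoPhase<3> particle;  // instantiate the 3D variant
      (void)particle;
      return 0;
    }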

MPMExplicitTwoPhase

  1. New class MPMExplicitTwoPhase derived from MPMBase, which includes the main loop for two-phase computation.
  2. New input functions in MPMBase to read the new types of input files for two-phase.

TwoPhase functions for nodes

  1. New functions for the two-phase computation are implemented directly in the current Node class (see node_multiphase.tcc)

Free surface detection functions in mesh

  1. New functions to detect the evolving free surface are implemented in the Mesh class (see mesh_multiphase.tcc); a rough illustration follows.
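
As a rough illustration (names and the exact criterion are assumptions, not the mesh_multiphase.tcc code), a volume-ratio-based detection could look like:

    #include <cstddef>
    #include <vector>

    // Illustrative sketch of a volume/density-based free-surface criterion:
    // a cell is flagged when the volume occupied by its particles falls
    // below a tolerance fraction of the cell volume. Names and the criterion
    // are assumptions; the real code is in mesh_multiphase.tcc.
    std::vector<bool> detect_free_surface_cells(
        const std::vector<double>& cell_particle_volume,  // particle volume per cell
        const std::vector<double>& cell_volume,           // geometric cell volume
        double volume_tolerance) {                        // e.g. 0.25 in the json
      std::vector<bool> free_surface(cell_volume.size(), false);
      for (std::size_t i = 0; i < cell_volume.size(); ++i) {
        const double fill_ratio = cell_particle_volume[i] / cell_volume[i];
        // Non-empty but under-filled cells are flagged as free-surface cells
        free_surface[i] = fill_ratio > 0. && fill_ratio < volume_tolerance;
      }
      return free_surface;
    }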

Unit tests and benchmarks All new implementations are validated by corresponding test functions, and the unit-test for two-phase computation is also included. Related 2D and 3D one-dimensional consolidation tests can be found here: https://github.com/cb-geo/mpm-benchmarks/pull/19.

codecov[bot] commented 3 years ago

Codecov Report

Merging #680 (2ad9bc2) into develop (c9630b3) will increase coverage by 12.44%. The diff coverage is 90.53%.

@@             Coverage Diff              @@
##           develop     #680       +/-   ##
============================================
+ Coverage    83.91%   96.34%   +12.44%     
============================================
  Files          169      147       -22     
  Lines        34176    32252     -1924     
============================================
+ Hits         28676    31073     +2397     
+ Misses        5500     1179     -4321     
Impacted Files Coverage Δ
include/io/io_mesh.h 100.00% <ø> (ø)
include/io/io_mesh_ascii.h 100.00% <ø> (ø)
include/loads_bcs/constraints.h 100.00% <ø> (ø)
include/mesh.h 100.00% <ø> (ø)
include/node_base.h 100.00% <ø> (ø)
include/solvers/mpm.h 100.00% <ø> (ø)
include/solvers/mpm_base.h 50.00% <ø> (ø)
src/io/partio_writer.cc 100.00% <ø> (ø)
tests/graph_test.cc 100.00% <ø> (ø)
tests/interface_test.cc 100.00% <ø> (ø)
... and 359 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update c9630b3...2ad9bc2.

tianchiTJ commented 3 years ago

@kks32 I think the hdf5 part is the only one left for the two-phase implementation. After finishing it, I think we can merge.

bodhinandach commented 3 years ago

@kks32 @tianchiTJ The current problems we have with deriving HDF5ParticleTwoPhase from HDF5Particle are the following:

  1. The initialise_particle function takes an HDF5Particle and requires only one material: image Using the function above, we get the following compile errors: image I am thinking of overloading that function to accept two materials and an HDF5ParticleTwoPhase: image or at least a vector of materials: image which also needs the subsequent overloaded function: image But this will require us to distinguish the different particle initializations (it might change some parts of the MPI transfer and resume, and require some if conditions); see the sketch after this list. Any further thoughts from you guys?

  2. The hdf5() function also has a problem. Having HDF5Particle as the return type gives no compiler error, since it is the base struct: image However, from outside the class only the variables available in the HDF5Particle struct can be accessed; all the variables specific to HDF5ParticleTwoPhase are gone. In this case, I don't think we can change the return type without changing the name of the function, so we might need to come up with: image Any suggestions to fix this?
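
To make point 1 concrete, here is a self-contained sketch of the overload idea with stub types (the signatures are assumptions drawn from this discussion, not the merged code):

    #include <memory>
    #include <vector>

    // Stub PODs and material standing in for the real types, so the
    // overload idea from point 1 can be shown self-contained.
    struct HDF5Particle { double mass{0.}; };
    struct HDF5ParticleTwoPhase : HDF5Particle { double pore_pressure{0.}; };
    template <unsigned Tdim> class Material {};

    template <unsigned Tdim>
    class Particle {
     public:
      virtual ~Particle() = default;
      // Existing style: one POD, one material
      virtual bool initialise_particle(
          HDF5Particle& particle,
          const std::shared_ptr<Material<Tdim>>& material) {
        (void)particle; (void)material;
        return true;
      }
      // Proposed overload: one POD, a vector of materials (one per phase)
      virtual bool initialise_particle(
          HDF5Particle& particle,
          const std::vector<std::shared_ptr<Material<Tdim>>>& materials) {
        (void)particle;
        return !materials.empty();
      }
    };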

Update

The current implementation assumes these functions to initialise and output hdf5 for two-phase: image image

kks32 commented 3 years ago

@bodhinandach Could you point me to where I can access the part of the code that doesn't compile? I can't see the derived TwoPhase HDF5 struct.

bodhinandach commented 3 years ago

@kks32 Would it be okay for me to open a new branch so that you can compare? I don't want to push it here yet because I haven't found a better way to fix it.

kks32 commented 3 years ago

@bodhinandach you can create a PR to this branch from your personal repo.

bodhinandach commented 3 years ago

@kks32 @tianchiTJ After some thought, I think it is okay to push it here. I don't have a private repo, so it's not convenient for me to move branches from a different repo. I decided to push here; please have a look.

In this branch, you can now find the derived HDF5ParticleTwoPhase struct and other necessary files related to MPI_Datatype. Please check the following files:

  1. particle_twophase.tcc: particularly these lines.
  2. hdf5_particle_twophase.h and hdf5_particle_twophase.cc
  3. mpi_datatypes_twophase.h: I overloaded the MPI_Datatype register_mpi_particle_type() function with a different input argument.
  4. particle_twophase_test.cc and mpi_particle_twophase_test.cc

A few notes from my side:

  1. The current implementation only adds the new hdf5 struct to handle the two-phase particle. I haven't changed any function in mpm_base or mesh to write, read, or perform any MPI data transfer. This is the next step, but I would like to discuss it with you first.
  2. The particle_twophase_test.cc and mpi_particle_twophase_test.cc tests are working in serial and parallel, meaning that the current HDF5ParticleTwoPhase struct can be used for MPI data transfer, hdf5 output, and particle initialization from hdf5.
  3. There are unresolved warnings while compiling the code (this is really ugly, but I don't know how to remove it). Any idea @kks32 @tianchiTJ image

I also ran the consolidation test in 3D and it works okay in parallel: image

My next move is to create or enhance the current functions in mpm_base and mesh to write, read, and perform runtime MPI data transfer. I am happy to discuss in case you have any suggestions.

kks32 commented 3 years ago
2. [`hdf5_particle_twophase.h`](https://github.com/cb-geo/mpm/blob/b208bf41928c72ca6f478bae11e31a556eb19cfe/include/hdf5_particle_twophase.h) and [`hdf5_particle_twophase.cc`](https://github.com/cb-geo/mpm/blob/b208bf41928c72ca6f478bae11e31a556eb19cfe/src/hdf5_particle_twophase.cc)
3. [`mpi_datatypes_twophase.h`](https://github.com/cb-geo/mpm/blob/b208bf41928c72ca6f478bae11e31a556eb19cfe/include/mpi_datatypes_twophase.h): I overloaded the MPI_Datatype `register_mpi_particle_type()` function with a different input argument.
3. There are unresolved warnings while compiling the code (this is really ugly, but I don't know how to remove it). Any idea @kks32 @tianchiTJ image

I see you have separate functions for hdf5() and hdf5_twophase(). If that's the case, you may be better off creating a separate POD for two-phase instead of deriving from the single-phase particle POD. This will avoid the warnings you get for the offsets.

  // Array containing the length of each block
  int lengths[nblocks] = {1, 1, 1, 1,  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
                          1, 1, 1, 1,  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
                          1, 1, 1, 20, 1, 1, 1, 1, 1, 1, 1, 1, 5};
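  // Note: the block of length 20 holds the solid-phase state variables;
  // the trailing 5 is the liquid-phase liquid_statevars mentioned below.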

The state variables are still set at 20; this is not right if you have a two-phase system with state variables for two different materials.

However, this design is different from the one we discussed of overloading hdf5 calls and derived types, now we have virtual functions to write PODs for each particle type.

kks32 commented 3 years ago

@kks32 @tianchiTJ The current problems we have with deriving HDF5ParticleTwoPhase from HDF5Particle are the following:

  1. The initialise_particle function takes an HDF5Particle and requires only one material: image Using the function above, we get the following compile errors: image

The issue is that you get a base class pointer. You need to downcast it to get at the derived data type.
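
A minimal illustration of the cast with stub PODs (a sketch, not this PR's code):

    #include <iostream>

    // Stub PODs illustrating the cast; the real structs are in the PR.
    struct HDF5Particle { double mass{0.}; };
    struct HDF5ParticleTwoPhase : HDF5Particle { double pore_pressure{0.}; };

    int main() {
      HDF5ParticleTwoPhase twophase;
      twophase.pore_pressure = 100.;

      // Through a base-class pointer only the base members are visible...
      HDF5Particle* base = &twophase;

      // ...so cast back down to the derived type to reach the two-phase
      // fields. static_cast is safe here because the pointee really is an
      // HDF5ParticleTwoPhase.
      auto* derived = static_cast<HDF5ParticleTwoPhase*>(base);
      std::cout << derived->pore_pressure << '\n';  // prints 100
      return 0;
    }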

bodhinandach commented 3 years ago

Just to respond to your comment @kks32

I see you have separate functions for hdf5() and hdf5_twophase(). If that's the case, you may be better off creating a separate POD for two-phase instead of deriving from the single-phase particle POD. This will avoid the warnings you get for the offsets.

I don't think C++ allows overloading on the return type alone, so that's the only way we can go.

The state variables are still set at 20; this is not right if you have a two-phase system with state variables for two different materials.

Yes, that's for the first phase. The second phase, the fluid, is assumed to have a lower number of state variables. As you can see above, I have added liquid_statevars with a size of 5.

However, this design is different from the one we discussed of overloading hdf5 calls and derived types, now we have virtual functions to write PODs for each particle type.

Yeah, I am trying to manage that, but I can't. I can just remove the derivation, but let's wait until today's discussion.

The issue is that you get a base class pointer. You need to downcast it to get at the derived data type.

I see. Let me give it a try. I might need to change the original function though. Let's see.

bodhinandach commented 3 years ago

Just to report on my experiment using reinterpret_cast.

  1. Using reinterpret_cast to cast the HDF5Particle returned by the hdf5() function to an HDF5ParticleTwoPhase pointer: image Result: it does not work; even though we can cast h5_test_temp into h5_test, h5_test_temp no longer contains the variables specific to HDF5ParticleTwoPhase. image

  2. Using reinterpret_cast to cast the HDF5Particle passed to the initialise_particle() function to an HDF5ParticleTwoPhase pointer: image Result: it works! Though we cannot have a constexpr as the function's first argument (see line 118), otherwise the code will not compile. See: https://en.cppreference.com/w/cpp/language/constant_expression A sketch of both behaviours follows.
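
The behaviour in point 1 is classic object slicing, while point 2 works because the cast is applied to the original object rather than a copy; a self-contained sketch with stub PODs:

    // Stub PODs; a sketch of why experiment 1 fails and experiment 2 works.
    struct HDF5Particle { double mass{0.}; };
    struct HDF5ParticleTwoPhase : HDF5Particle { double pore_pressure{0.}; };

    // Experiment 1: hdf5() returns the base POD *by value*, so the derived
    // fields are sliced away before any cast can recover them.
    HDF5Particle hdf5_by_value() {
      HDF5ParticleTwoPhase twophase;
      twophase.pore_pressure = 100.;
      return twophase;  // slicing: only the HDF5Particle part is copied
    }

    // Experiment 2: casting a reference to the original object works, since
    // no copy is made and the referent really is an HDF5ParticleTwoPhase.
    double read_pore_pressure(HDF5Particle& particle) {
      return reinterpret_cast<HDF5ParticleTwoPhase&>(particle).pore_pressure;
    }

    int main() {
      HDF5ParticleTwoPhase twophase;
      twophase.pore_pressure = 42.;
      return read_pore_pressure(twophase) == 42. ? 0 : 1;  // returns 0
    }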

kks32 commented 3 years ago
1. Using `reinterpret_cast` to cast the `HDF5Particle` returned by the `hdf5()` function to an `HDF5ParticleTwoPhase` pointer
   It does not work; even though we can cast `h5_test_temp` into `h5_test`, `h5_test_temp` no longer contains the variables specific to `HDF5ParticleTwoPhase`.

This depends on how the HDF5 POD is created. If you create a derived class object, fill in its details, and return it through a base class pointer, it should work. Could you share the lines of code corresponding to this call?

kks32 commented 3 years ago

Thanks @bodhinandach for the work on HDF5 POD on two phase.

To handle HDF5 POD returns in the derived Particle and ParticleTwoPhase, we need to return a pointer. The initial design considered a void* return type; however, that has issues because the returned data is local to the function. This was then fixed by using std::shared_ptr<void>, which avoids returning a local variable and ties the object lifetime to the shared_ptr<void>; a shared_ptr<void> deletes the right object type and size even though it is of type void. However, this design is a bit gross. Final design:

Base Class

//! Return particle data as POD
//! \retval pod POD of the particle
virtual std::shared_ptr<mpm::PODParticle> pod() = 0;

Derived class.h

//! Return particle data as POD
//! \retval pod POD of the particle
std::shared_ptr<mpm::PODParticle> pod() override;

Derived Class impl

//! Return particle data as POD
template <unsigned Tdim>
std::shared_ptr<mpm::PODParticle> mpm::TwoPhaseParticle<Tdim>::pod() {
  auto particle_data = std::make_shared<mpm::PODParticleTwoPhase>();
  return particle_data;
}

Implementation

auto pod_test =
    std::static_pointer_cast<mpm::PODParticleTwoPhase>(particle->pod());

It may be better to rename the current HDF5Particle to PODParticle, which better describes what it does.

A version of this design is available at: https://github.com/kks32/mpm/tree/solver/two-phase-modify-hdf5 and https://github.com/bodhinandach/mpm/compare/solver/two-phase-modify-hdf5...kks32:solver/two-phase-modify-hdf5

bodhinandach commented 3 years ago

@kks32 @tianchiTJ There are a few things added in my latest commits; I will list them one by one here:

Modified and working

  1. HDF5 has been changed to POD as @kks32 requested. Now in particle, we have a function:
    //! Return particle data as POD
    //! \retval particle POD of the particle
    virtual std::shared_ptr<void> pod() = 0;

    and

    //! Initialise particle POD data
    //! \param[in] particle POD data of particle
    //! \retval status Status of reading POD particle
    virtual bool initialise_particle(PODParticle& particle) = 0;
    //! Initialise particle POD data and material
    //! \param[in] particle POD data of particle
    //! \param[in] materials Materials associated with the particle, arranged in a vector
    //! \retval status Status of reading POD particle
    virtual bool initialise_particle(
      PODParticle& particle,
      const std::vector<std::shared_ptr<Material<Tdim>>>& materials) = 0;

    The last function is refactored to take a vector of materials as input, to accommodate multiple materials for the two-phase particle. These functions are working well. However, note that we need to specify which type to cast to in the implementation, i.e. for a single-phase particle:

    auto pod_test = std::static_pointer_cast<mpm::PODParticle>(particle->pod());

    and for a two-phase particle:

    auto pod_test = std::static_pointer_cast<mpm::PODParticleTwoPhase>(particle->pod());

The test for this is available here.

  2. MPI serialize, deserialize, and compute_pack_size have been derived (see here) and tested (see here). I also tested running it together with dynamic load balancing, and it was working well.

  3. I also merged the current branch with develop, so the modifications made in PR #685 have been incorporated. We can modify the two-phase solver to use MPMScheme after this PR is merged.

Modified and currently not working

  1. I made a modification to write_particles_hdf5 and read_particles_hdf5 in mesh.tcc to write and read hdf5 data from the particle POD. To accommodate the particle type, I first introduced particle_types_ in mpm_base to store what types of particles are involved in the simulation:
    //! Particle types
    std::set<std::string> particle_types_;

    This set is populated during the initialise_particles function here. The idea is that, knowing which particle types are involved, we can output hdf5 correctly depending on the type. For that, I added an overload of nparticles() to count the number of particles associated with a passed type here; a sketch of the idea follows.
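
A minimal sketch of such an overload with stub types (the real function sits on the mesh and its signature may differ):

    #include <algorithm>
    #include <cstddef>
    #include <memory>
    #include <string>
    #include <vector>

    // Stub particle carrying a type string, standing in for the real
    // particle base class.
    struct ParticleBase {
      explicit ParticleBase(std::string t) : type(std::move(t)) {}
      std::string type;
    };

    // Sketch of the nparticles(type) overload: count only the particles
    // whose type matches the requested one. Names here are assumptions.
    std::size_t nparticles(
        const std::vector<std::shared_ptr<ParticleBase>>& particles,
        const std::string& type) {
      return static_cast<std::size_t>(
          std::count_if(particles.begin(), particles.end(),
                        [&type](const auto& p) { return p->type == type; }));
    }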

The main functions to modify are the write and read functions; let me go through them one by one.

Need to be modified

  1. The write_partio function also needs to be modified for different particle types. I will do it as soon as point 4 is addressed.

kks32 commented 3 years ago

@bodhinandach thanks for the detailed explanation. I was thinking of adding an iteration inside write_hdf5() to iterate over all available particle types. This way you can have a list of PODs and values for the different base particle types and use them.

Again, could we adopt a similar structure for reading hdf5 particles so we can iterate over the available types?
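
A minimal sketch of that iteration (write_pod_group is a hypothetical placeholder, not a function in the repo):

    #include <iostream>
    #include <set>
    #include <string>

    // Assumed placeholder for a per-type HDF5 write; not a repo function.
    void write_pod_group(const std::string& type) {
      std::cout << "writing POD dataset for particle type " << type << '\n';
    }

    // Sketch of the suggested structure: iterate only over the particle
    // types actually present in the simulation (cf. particle_types_) and
    // write one POD dataset per type.
    void write_particles_hdf5(const std::set<std::string>& particle_types) {
      for (const auto& type : particle_types) write_pod_group(type);
    }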

bodhinandach commented 3 years ago

@kks32 I am not sure I understand your idea correctly. If we loop over all particle types, the cost will be types x nparticles. Wouldn't that be too expensive? I can pass the created particle_types_ to the read and write functions, but the problem with the PODParticle vector and mpm::pod::particle will still be there, as pointed out above.

kks32 commented 3 years ago

@bodhinandach sorry, I meant iterating through the particle types available in the simulation, like you have in particle_types_.

I'd suggest using a struct to store information on particle types, like the number of data frames, data types, etc. You can't iterate over a namespace; it's easier to do that over objects. A sketch of such a struct follows.
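
A minimal sketch of such a descriptor (all field names and type strings are illustrative assumptions):

    #include <cstddef>
    #include <map>
    #include <string>

    // Hedged sketch of a per-type descriptor as suggested above: unlike a
    // namespace, objects like these can be stored in a map and iterated
    // over.
    struct ParticleTypeInfo {
      unsigned nfields;      // number of data fields in the type's POD
      std::size_t pod_size;  // sizeof the POD, for HDF5/MPI buffer sizing
    };

    int main() {
      std::map<std::string, ParticleTypeInfo> registry;
      registry["P3D"] = {40, 328};        // hypothetical single-phase entry
      registry["P3D2PHASE"] = {55, 480};  // hypothetical two-phase entry
      for (const auto& entry : registry) {
        // iterate over registered types, e.g. to drive hdf5 reads/writes
        (void)entry;
      }
      return 0;
    }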

bodhinandach commented 3 years ago

@kks32 I took some of the functions out to mesh_multiphase.tcc as you suggested. I will move the other functions related to two-phase out in the semi-implicit and 2P2P PRs too.

bodhinandach commented 3 years ago

@kks32 Thanks for your time reviewing this PR. It looks much better now. I have partially modified the PR according to your comments. Can you spend some time checking it again?

bodhinandach commented 3 years ago

ping @kks32, are there any further issues with this branch? Other than dynamic load balancing and the resume feature in MPI, I am good.

bodhinandach commented 3 years ago

@kks32 Some features are missing or not working in MPI while running two-phase:

  1. compute_free_surface_by_geometry: Please use compute_free_surface_by_density instead. You can change the setting below in the input json:

    "free_surface_detection": {
      "type": "density",
      "volume_tolerance": 0.25
    }

  2. The dynamic_load_balancing and resume functions are not working properly in 3D (they were checked to be OK in 2D). The problem was observed in the halo exchange function for nodes shared between MPI ranks. Error in halo exchange: image NOTE: For instance, at node 1050 (a shared node), the solid and liquid masses are expected to be 0.01456 and 0.0024, respectively, but they are not properly summed, thus producing: image Error in dynamic load balancing at step 10: image
  2. dynamic_load_balancing and resume functions are not working properly in 3D (It was checked to be OK for 2D). Problem was observed to be occurring in the halo exchange function for shared nodes between MPI rank. Error in halo exchange: image NOTE: For instance, at node 1050 (a shared node), solid and liquid mass are expected to be 0.01456 and 0.0024, respectively (they are not properly summed). Thus, producing this: image Error in dynamic load balancing at step 10: image