Closed: bodhinandach closed this pull request 3 years ago.
Merging #682 into develop will decrease coverage by 0.05%. The diff coverage is 68.29%.
```diff
@@            Coverage Diff            @@
##           develop     #682    +/-   ##
==========================================
- Coverage    96.66%   96.61%   -0.05%
==========================================
  Files          123      123
  Lines        25375    25412      +37
==========================================
+ Hits         24527    24551      +24
- Misses         848      861      +13
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| include/particles/particle.h | 100.00% <ø> (ø) | |
| include/particles/particle_base.h | 100.00% <ø> (ø) | |
| include/solvers/mpm_explicit.tcc | 58.16% <27.78%> (-5.04%) | :arrow_down: |
| include/particles/particle.tcc | 94.19% <100.00%> (+0.06%) | :arrow_up: |
| tests/mpm_explicit_usf_test.cc | 100.00% <100.00%> (ø) | |
| tests/mpm_explicit_usl_test.cc | 100.00% <100.00%> (ø) | |
| tests/particle_test.cc | 99.87% <100.00%> (+<0.01%) | :arrow_up: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 3b20da5...055b987.
@kks32 I am okay with that. How do you want to design or categorize them? I was thinking of calling the node `initialise`, `compute_mass_momentum`, and `compute_velocity` so that they form a set like the one done initially. However, the computational time becomes even slower as you do more mappings. Please feel free to commit your change @kks32.
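For illustration, here is a minimal, self-contained sketch of what grouping those three nodal calls into one set might look like. The `Node`, `Particle`, and `nodal_update_set` names are hypothetical stand-ins, not the actual mpm API; the point is that each call to such a set is a full particle-to-node mapping pass, which is why adding more sets costs more time:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct Node {
  double mass{0.}, momentum{0.}, velocity{0.};
  // Reset nodal quantities at the start of a mapping pass.
  void initialise() { mass = momentum = velocity = 0.; }
  // Nodal velocity is recovered from mapped momentum: v = p / m.
  void compute_velocity() {
    if (mass > 1e-15) velocity = momentum / mass;
  }
};

struct Particle {
  double mass, velocity;
  std::size_t node_id;  // single-node "shape function" for brevity
};

// One grouped nodal-update "set": initialise -> map mass/momentum
// -> compute velocity. Every extra call is an extra mapping pass.
void nodal_update_set(std::vector<Node>& nodes,
                      const std::vector<Particle>& particles) {
  for (auto& n : nodes) n.initialise();
  for (const auto& p : particles) {
    nodes[p.node_id].mass += p.mass;
    nodes[p.node_id].momentum += p.mass * p.velocity;
  }
  for (auto& n : nodes) n.compute_velocity();
}

int main() {
  std::vector<Node> nodes(1);
  std::vector<Particle> particles{{1.0, 2.0, 0}, {1.0, 4.0, 0}};
  nodal_update_set(nodes, particles);
  std::cout << nodes[0].velocity << "\n";  // 3 = (2 + 4) / 2
}
```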
I am closing this PR and will reconsider the framework after looking at #685.
Describe the PR
This PR quickly adds the explicit MUSL solver capability to our `mpm_explicit` solver. I thought it would be useful since `mpm::StressUpdate::MUSL` is available but has never been used. Given its considerably slower performance, if you think the MUSL option is not necessary, let's remove it.

Additional context
The MUSL option is basically similar to the USL stress update, but it requires one more nodal momentum update right before recomputing the strain rate and strain for the stress update (see the step-ordering sketch after this comment). I ran the benchmark tests in both serial and parallel, and both are OK! The need to call `map_momentum_to_nodes` and `compute_velocity` one more time adds a bit of computation time, so I am not sure how useful it is. Here is the time comparison for the 3D hydrostatic column test in our benchmark suite, run on my local machine.
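As referenced above, here is a minimal sketch of the step ordering only. The `stage` placeholders and the `explicit_step` driver are hypothetical, not the real `mpm_explicit.tcc` implementation; the sketch just shows where MUSL inserts the extra momentum mapping and velocity recomputation relative to USL:

```cpp
#include <iostream>

enum class StressUpdate { USL, MUSL };

// Placeholder stage: prints its name instead of doing MPM work.
void stage(const char* name) { std::cout << "  " << name << "\n"; }

// Simplified explicit MPM step; the exact stage list in the real
// solver differs, only the MUSL-specific branch matters here.
void explicit_step(StressUpdate scheme) {
  stage("initialise nodes");
  stage("map mass and momentum to nodes");         // particle -> node
  stage("compute nodal velocity");                 // v = p / m
  stage("compute external and internal forces");
  stage("update nodal momentum with forces");
  stage("update particle velocity and position");  // node -> particle

  if (scheme == StressUpdate::MUSL) {
    // Extra pass unique to MUSL: re-map the updated particle momentum
    // and recompute nodal velocities before the strain/stress update.
    stage("map momentum to nodes (again)");
    stage("compute nodal velocity (again)");
  }

  stage("compute strain rate and strain");
  stage("compute stress (last)");
}

int main() {
  std::cout << "-- USL --\n";
  explicit_step(StressUpdate::USL);
  std::cout << "-- MUSL --\n";
  explicit_step(StressUpdate::MUSL);
}
```

The two extra stages in the MUSL branch correspond to the additional `map_momentum_to_nodes` and `compute_velocity` calls mentioned above, which is where the extra computation time comes from.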