Closed — benegee closed this 1 month ago
This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.
Attention: Patch coverage is 93.88464%, with 88 lines in your changes missing coverage. Please review.
Project coverage is 96.11%. Comparing base (909abb4) to head (bded959). Report is 58 commits behind head on main.
What about other mesh types like the `TreeMesh`? Does it print the local or global number of cells (see here)?
The `TreeMesh` replicates all cell info on all ranks. Thus, it prints the global info.
@benegee Please note that you should also adapt the AMR output, probably in these three functions: https://github.com/trixi-framework/Trixi.jl/blob/8a9fc7baeca9807de185592d5cb8d60040a24f09/src/callbacks_step/analysis.jl#L494 https://github.com/trixi-framework/Trixi.jl/blob/8a9fc7baeca9807de185592d5cb8d60040a24f09/src/callbacks_step/analysis.jl#L520 https://github.com/trixi-framework/Trixi.jl/blob/8a9fc7baeca9807de185592d5cb8d60040a24f09/src/callbacks_step/analysis.jl#L554
Otherwise we get a global element count but only rank-0 information on AMR, which is bound to cause confusion IMHO
True! I realized this in the meantime as well, but have not finished the MPI syncing of element counts.
Ideally, you'll use an implementation that only requires a single additional `MPI_Reduce` call.
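As a minimal sketch of such a single-reduction approach (not the actual Trixi.jl implementation; `MPI.jl`'s `Reduce` is assumed, and the local element count is taken as a plain integer):

```julia
using MPI

# Sum the per-rank element counts into a global count with one MPI.Reduce.
# Only rank 0 needs the result for printing, so Reduce (not Allreduce) suffices;
# non-root ranks receive `nothing`.
function global_element_count(n_elements_local::Integer, comm::MPI.Comm)
    MPI.Reduce(n_elements_local, +, comm; root = 0)
end
```

Since the result is only used in the analysis output, a root-only `Reduce` avoids the extra broadcast an `Allreduce` would imply.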
`ndofsglobal` and `nelementsglobal` are now used in the `Base.show` output of semidiscretizations and meshes, and of the `AnalysisCallback`.
For `DG`, `ndofsglobal` already relied on `nelementsglobal`:

and `nelementsglobal` was already MPI-aware:
For `DGMulti`, `ndofsglobal` was already MPI-aware as well:

I now added `nelementsglobal` in analogy.
New solver types would now have to implement `ndofsglobal` and `nelementsglobal`.
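As a hedged sketch of what that interface could look like for a hypothetical new solver (the `mpi_comm()` helper is modeled on Trixi.jl's MPI utilities, and the local `nelements`/`ndofs` accessors are assumptions, not the actual signatures):

```julia
using MPI

# Hypothetical global accessors for a new solver type. Each wraps the
# corresponding local count in a collective sum across all ranks, so every
# rank sees the same global value (Allreduce rather than a root-only Reduce).
nelementsglobal(mesh, solver, cache) =
    MPI.Allreduce(nelements(solver, cache), +, mpi_comm())

ndofsglobal(mesh, solver, cache) =
    MPI.Allreduce(ndofs(mesh, solver, cache), +, mpi_comm())
```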
The local element numbers per level are now summed up across all ranks before being printed on rank 0. To avoid having to synchronize the minimum and maximum element levels in advance, I took this information from the AMRController. This currently works, but would require new controllers to provide the same information. If you see a better solution, please give me a hint.
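A sketch of how that per-level summation might look (hypothetical helper, not the PR's actual code; `min_level`/`max_level` stand in for the bounds taken from the AMR controller):

```julia
using MPI

# Count the local elements on each refinement level, then sum the whole
# vector across ranks in a single reduction. The summed vector is only
# needed on rank 0, where the AMR statistics are printed.
function global_counts_per_level(element_levels::Vector{Int},
                                 min_level::Integer, max_level::Integer,
                                 comm::MPI.Comm)
    local_counts = [count(==(level), element_levels)
                    for level in min_level:max_level]
    MPI.Reduce(local_counts, +, comm; root = 0)
end
```

Because the level bounds come from the controller, all ranks build vectors of the same length and a single vector-valued `MPI.Reduce` is enough; no prior min/max synchronization is needed.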
Thanks a lot for tackling this @benegee! This makes Trixi.jl much more usable in parallel 💪
Resolves #1616