MFlowCode / MFC

Exascale simulation of multiphase/physics fluid dynamics
https://mflowcode.github.io
MIT License

Following up on #385 and #411 -- computes time per grid point per equation per RHS evaluation #420

Closed: AiredaleDev closed this 1 month ago

AiredaleDev commented 1 month ago

Description

Fixes #299 and #385; follows up on #411.
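
For context, the reported metric normalizes total wall-clock time by problem size: nanoseconds per grid point, per equation, per right-hand-side (RHS) evaluation. A minimal Fortran sketch of that normalization follows; the variable names and example values are hypothetical illustrations, not MFC's actual identifiers.

    ! Minimal sketch of the normalized timing metric (hypothetical names,
    ! not MFC's actual variables): ns per grid point per equation per RHS call.
    program timing_metric_sketch
        implicit none
        real(kind=8) :: wall_time_ns, metric
        integer      :: num_grid_points, num_eqns, num_rhs_evals

        wall_time_ns    = 2.5d9   ! example total wall time in nanoseconds
        num_grid_points = 200     ! e.g. a 200x0x0 (1D) case
        num_eqns        = 5       ! number of equations solved
        num_rhs_evals   = 1000    ! time steps times RHS evaluations per step

        metric = wall_time_ns / real(num_grid_points*num_eqns*num_rhs_evals, kind=8)
        print '(a,es12.5,a)', ' Final Time ', metric, ' ns/gp/eqn/rhs'
    end program timing_metric_sketch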

Type of change

Scope

How Has This Been Tested?

Test Configuration:

System76 Pangolin (2024 Model, CPU only)

AMD Ryzen 7 7840U: 8 cores, 16 threads; 32 GB of LPDDR5 @ 6400 MHz

Checklist

If your code changes any source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

NOTE: While I changed the caption in the docs, I have yet to collect and document new performance metrics! Please let me know if anything else looks amiss.

sbryngelson commented 1 month ago

Did you try this locally? For me, I get the following from simulation (running on 4 ranks, though 1 rank has the same problem):

 [100%]  Time step      992 of 1001 @ t_step = 991
 [100%]  Time step      993 of 1001 @ t_step = 992
 [100%]  Time step      994 of 1001 @ t_step = 993
 [100%]  Time step      995 of 1001 @ t_step = 994
 [100%]  Time step      996 of 1001 @ t_step = 995
 [100%]  Time step      997 of 1001 @ t_step = 996
 [100%]  Time step      998 of 1001 @ t_step = 997
 [100%]  Time step      999 of 1001 @ t_step = 998
 [100%]  Time step     1000 of 1001 @ t_step = 999
 Final Time                       NaN ns/gp/eqn/rhs

and on pre_process I actually still get the old metric:

+ mpirun -np 4 /Users/spencer/Downloads/MFC/build/install/b61184720f/bin/pre_process
 Pre-processing a 200x0x0 case on 4 rank(s)
 Processing patch           1
 Processing patch           2
 Final Time   2.0140000000000019E-003
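
A NaN (rather than Inf) in the simulation's final-time line above usually points to a zero-over-zero division or an uninitialized time accumulator on the reporting rank. A minimal guard sketch, assuming hypothetical names rather than MFC's actual timing variables, would only normalize when the divisor is positive:

    ! Hypothetical guard against the NaN above (illustrative names only):
    ! normalize only when the divisor is positive; otherwise fall back to
    ! printing the raw wall time instead of dividing by zero.
    subroutine report_final_time(wall_time_ns, num_grid_points, num_eqns, num_rhs_evals)
        implicit none
        real(kind=8), intent(in) :: wall_time_ns
        integer,      intent(in) :: num_grid_points, num_eqns, num_rhs_evals
        real(kind=8) :: metric

        if (num_grid_points > 0 .and. num_eqns > 0 .and. num_rhs_evals > 0) then
            metric = wall_time_ns / real(num_grid_points*num_eqns*num_rhs_evals, kind=8)
            print '(a,es12.5,a)', ' Final Time ', metric, ' ns/gp/eqn/rhs'
        else
            print '(a,es12.5,a)', ' Final Time ', wall_time_ns, ' ns (unnormalized)'
        end if
    end subroutine report_final_time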
sbryngelson commented 1 month ago

There are now two separate people contributing two PRs on the same topic; can we please converge?