idaholab / moose

Multiphysics Object Oriented Simulation Environment
https://www.mooseframework.org
GNU Lesser General Public License v2.1

SubApp output is broken when sub_cycling and Picard are on #6270

Open andrsd opened 8 years ago

andrsd commented 8 years ago

The input files are checked in at test/tests/multiapps/picard_sub_cycling. This test has a master app that takes 4 time steps, and a sub-app that sub-cycles within each of them (each cycle has 10 steps). One would expect to see 4 time steps in the sub-app output file, but there are only 2: the initial condition and the last time step.

If Picard is off and sub_cycling is on, we get the right number of time steps in the output file. If Picard is on and sub-cycling is off, then the output is also correct.

andrsd commented 8 years ago

This needs to be fixed for the sprint problem we are working on now, thus the critical label.

permcody commented 8 years ago

Is this problem related to execution or just Outputs?

andrsd commented 8 years ago

Just output; execution is fine as far as I can tell.

permcody commented 7 years ago

Still broken for over a year. I guess it's not critical?

permcody commented 6 years ago

Hmm... I wonder... https://civet.inl.gov/job/173180/

friedmud commented 6 years ago

This is not an error - this is the correct behavior.

When not doing Picard, you can set output_sub_cycles = true to tell a sub-app to write out its intermediate timesteps... but that won't work with Picard.

The reason is that if the sub-cycle outputs are written to disk, there is no way to "undo" them on each Picard iteration (and you don't know which Picard iteration is the last one, so you can't write them out only then).

Think about CSV output. You get one output per timestep... if you write one out every sub-cycle, you end up with many files for a single Picard iteration. THEN the app is restored back to the beginning of the step, and the next Picard iteration writes out a different set of CSV files.

Now: you might think "but the second iteration will just overwrite the CSV files from the first iteration!"... but that is not necessarily true. The number of timesteps done in a Picard iteration can (and often does!) vary. There's no guarantee that subsequent Picard iterations will generate the same CSV files as the first Picard iteration.
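To make the overwrite pitfall concrete, here is a small Python sketch (purely illustrative, not MOOSE code) of a sub-app that naively writes one CSV per sub-cycle step. The file names and step counts are invented; the point is that a second Picard iteration with fewer sub-cycle steps leaves stale files from the first iteration behind.

```python
import os
import tempfile

def run_picard_iteration(outdir, iteration, num_substeps):
    """Write one CSV per sub-cycle step, as naive per-step output would."""
    for step in range(1, num_substeps + 1):
        path = os.path.join(outdir, f"sub_out_{step:04d}.csv")
        with open(path, "w") as f:
            f.write(f"time,value\n{step * 0.1},{iteration}\n")

outdir = tempfile.mkdtemp()

# First Picard iteration: the sub-app happens to take 10 sub-cycle steps.
run_picard_iteration(outdir, iteration=1, num_substeps=10)

# The app is restored to the start of the step; the second iteration
# happens to need only 7 sub-cycle steps.
run_picard_iteration(outdir, iteration=2, num_substeps=7)

files = sorted(os.listdir(outdir))
print(len(files))  # 10 files on disk, but only 7 belong to the final iteration
stale = [f for f in files if int(f[8:12]) > 7]
print(stale)  # steps 8-10 are stale leftovers from iteration 1
```

The directory ends up mixing files from two different iterations, which is exactly why simple overwriting does not clean things up.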

There are only two options:

  1. Don't write out sub-cycles (what we do).
  2. Somehow keep track of all of the files (or modifications to files in the case of Exodus) that you create during sub-cycling and undo that (delete or un-modify) when the app is reset back at the beginning of each Picard iteration.

I deemed (2) ridiculously complicated... so implemented (1). This is not a bug.

permcody commented 6 years ago

Well that explanation does indeed make sense, but it's also kind of a bummer. This may be worth looking at in the future (maybe very near future). It's a fairly common and useful scenario. To say we just can't produce output may not fly as we move forward with more multiphysics coupling.

GiudGiud commented 1 month ago

There is a 3rd option that is not very complicated.

It's just writing the last sub-cycle step solution at the end of the fixed point iteration. That last step coincides with the main app step.
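A rough Python sketch of that 3rd option (function names invented, not MOOSE code): buffer the sub-cycle solutions in memory and output only the final one once the fixed point iteration has finished. Since the last sub-cycle step lands on the main-app step, the output ends up with exactly one entry per main-app timestep, with nothing to undo.

```python
def solve_main_step(num_substeps):
    """Stand-in for a sub-app solve: return the solution at each
    sub-cycle step instead of writing each one to disk."""
    return [0.1 * s for s in range(1, num_substeps + 1)]

def fixed_point_solve(substep_counts):
    """Run the Picard iterations for one main-app step; keep only the
    last iteration's final sub-cycle solution."""
    final_solution = None
    for n in substep_counts:
        solutions = solve_main_step(n)   # sub-cycle solutions stay in memory
        final_solution = solutions[-1]   # overwritten each Picard iteration
    return final_solution                # output once, after convergence

output = []
# 4 main-app steps; each takes a different number of Picard iterations,
# and each iteration may use a different number of sub-cycle steps.
for picard_substeps in [[10], [10, 7], [8], [10, 9, 10]]:
    output.append(fixed_point_solve(picard_substeps))

print(len(output))  # 4 outputs: one per main-app timestep
```

Intermediate sub-cycle solutions are still lost, but the user gets the expected one-output-per-main-step files without any rollback machinery.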