geodynamics / vq

Virtual Quake is a boundary element code designed to investigate long term fault system behavior and interactions between faults through stress transfer.

Not writing final stress values to final sweeps #67

Closed -- kwschultz closed this issue 9 years ago

kwschultz commented 9 years ago

It looks like VQ is not writing the final stress values to the HDF5 output for the last sweep of an event.

We are hoping this is a one-off indexing error.

[attached image: unwritten_stress]

markyoder commented 9 years ago

(this issue appears to be resolved; see the rest of this and the next comment for details)

let's start with some questions:

1) "is this by design?"

no; corrections are under way.

2) in processStaticFailure(), is there a reason that the blocks are sampled differently for the post-original and post-secondary fail loops?

original: for (it=sim->begin(); it!=sim->end(); ++it) {...}
secondary: for (gid=0; gid<sim->numGlobalBlocks(); ++gid) {...}

these formulations look identical (in function); we are standardizing this looping pattern in the code (see the sketch after these questions).

3) are we, possibly, only observing cases where slip<0, which we basically ignore?

no.
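
for illustration, a minimal self-contained sketch of the two loop forms (the Block/Simulation stubs below are stand-ins, not the actual VQ classes; note that in a parallel run the iterator form presumably visits only this node's local blocks, while the index form covers all global ids):

```cpp
// Stub types standing in for the VQ classes; only the two loop FORMS
// are the point here.
#include <cstdio>
#include <vector>

typedef unsigned int BlockID;

struct Block {
    BlockID id;
    BlockID getBlockID() const { return id; }
};

struct Simulation {
    std::vector<Block> blocks;
    std::vector<Block>::iterator begin() { return blocks.begin(); }
    std::vector<Block>::iterator end()   { return blocks.end(); }
    BlockID numGlobalBlocks() const { return (BlockID)blocks.size(); }
};

int main() {
    Simulation simObj;
    for (BlockID i = 0; i < 5; ++i) { Block b; b.id = i; simObj.blocks.push_back(b); }
    Simulation *sim = &simObj;

    // form used after original failures: iterate blocks via iterators
    std::vector<Block>::iterator it;
    for (it = sim->begin(); it != sim->end(); ++it)
        printf("iterator form: block %u\n", it->getBlockID());

    // form used after secondary failures: iterate all global block ids
    for (BlockID gid = 0; gid < sim->numGlobalBlocks(); ++gid)
        printf("index form:    block %u\n", gid);

    return 0;
}
```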

if not, this problem is almost certainly to be solved in RunEvent.cpp:RunEvent::processStaticFailure(), and presumably in the nested call to RunEvent::processBlocksSecondaryFailures(). basically, processStaticFailure() executes a call to processBlocksOrigFail(), which sets up the "original" failures (blocks' first-time failures) -- including setting initial-stress values. it then executes processBlocksSecondaryFailures(), which re-evaluates blocks that failed in previous sweeps.
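
as a reading aid, here is a schematic of that call sequence; everything below is a stub, and only the call order reflects the description above:

```cpp
// Self-contained schematic of the sweep loop in processStaticFailure().
// All functions are stubs for the real RunEvent.cpp routines.
#include <cstdio>

struct Simulation { int sweeps_left; };

static void processBlocksOrigFail(Simulation *sim) {
    printf("original failures: first-time failures, initial stresses recorded\n");
}
static void distributeUpdateField(Simulation *sim) {
    printf("distribute stress changes to all blocks (incl. previously failed)\n");
}
static void processBlocksSecondaryFailures(Simulation *sim) {
    printf("re-evaluate blocks that failed in earlier sweeps\n");
}

int main() {
    Simulation sim = {3};  // pretend the event lasts three sweeps
    while (sim.sweeps_left-- > 0) {
        processBlocksOrigFail(&sim);
        distributeUpdateField(&sim);          // in VQ: sim->distributeUpdateField()
        processBlocksSecondaryFailures(&sim);
        // per-sweep stresses get recorded around here; the bug (next comment)
        // was that final stresses were only recorded for the new
        // original-failure blocks
    }
    return 0;
}
```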

SO, the main questions are (and, best we know, as per general discovery and good counsel from eric): 1) what should the initial/final_stress values look like? how are they supposed to be updating?

these should probably be the same (s_i[j] = s_f[j-1]; the initial stress of sweep j should equal the final stress of sweep j-1), but this is still not the case (see the sketch below). that said, note that we run processBlocksOrigFail() and subsequently distribute [ sim->distributeUpdateField() ] before we (re)run processBlocksSecondaryFailures(), which changes the stress on all blocks -- including previously failed blocks. we then "process" the secondary failures, and the recorded stress change is the difference between the stress before and after processBlocksSecondaryFailures().

so (in other words), for a given block -- consider the first block to fail, or any block's first failure -- we have the following succession of stress states:

(0) initial pre-failure stress
(1) primary post-failure stress (following initial failure)
(2) a new round of "original failures" changes stresses all around -- including on this block
(3) secondary failure of this (and other) blocks
(4) repeat (2), (3) until end-of-event
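
to make the expected continuity concrete, a small self-contained sketch checking s_i[j] = s_f[j-1] over one block's per-sweep records (the numbers and tolerance are made up for illustration; this is not VQ's HDF5 layout):

```cpp
// Check the expected per-block stress continuity across sweeps:
// the initial stress of sweep j should equal the final stress of sweep j-1.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // one block's recorded stresses over four sweeps (illustrative values)
    std::vector<double> initial_stress;
    std::vector<double> final_stress;
    initial_stress.push_back(10.0); final_stress.push_back(7.5);
    initial_stress.push_back(7.5);  final_stress.push_back(6.0);
    initial_stress.push_back(6.0);  final_stress.push_back(5.2);
    initial_stress.push_back(5.2);  final_stress.push_back(4.9);

    const double tol = 1e-9;
    for (size_t j = 1; j < initial_stress.size(); ++j) {
        if (std::fabs(initial_stress[j] - final_stress[j - 1]) > tol)
            printf("continuity violated at sweep %zu\n", j);
        else
            printf("sweep %zu: s_i[j] = s_f[j-1] = %g (ok)\n", j, initial_stress[j]);
    }
    return 0;
}
```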

2) are secondary failures properly contributing to the overall stress distribution? is this a real stress-distribution problem or just a reporting problem (aka, the data are correct within the simulation but reported incorrectly -- they report the same in hdf5 and text)?

this is at least half-fixed. we're catching stress changes; they may still not be correct.

3) what is the proper way to loop through the secondary failures?

implementing the standard form: for (it=sim->begin(); it!=sim->end(); ++it) { BlockID gid=it->getBlockID(); ... }

4) minor note/question: in RunEvent::processBlocksSecondaryFailures(), we loop through the whole list of local blocks to determine whether they are candidates for secondary failure (if sim->getFailed(gid) && not in global_failed_events). would it be viable and faster to just loop through the list of actual sweep events (called event_sweeps in the code, and sometimes passed as _sweeps, i think)? see the sketch below. this looping and management of global/local ids will be revised and optimized.
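
to make 4) concrete, a sketch contrasting the full local-block scan with a scan over the sweep-event list; all names below (SweepEvent, the failed/global_failed sets) are illustrative stand-ins for the VQ structures:

```cpp
// Two candidate-selection strategies for secondary failures.
// The sweep-list scan is O(#failed blocks) instead of O(#local blocks).
#include <cstdio>
#include <set>
#include <vector>

typedef unsigned int BlockID;

struct SweepEvent { BlockID gid; };  // stand-in for a sweep-event record

int main() {
    const BlockID num_local_blocks = 1000;
    std::set<BlockID> failed;         // stand-in for sim->getFailed(gid)
    std::set<BlockID> global_failed;  // blocks already marked globally failed
    std::vector<SweepEvent> event_sweeps;

    failed.insert(3); failed.insert(7); failed.insert(42);
    global_failed.insert(7);
    SweepEvent e1 = {3}, e2 = {7}, e3 = {42};
    event_sweeps.push_back(e1); event_sweeps.push_back(e2); event_sweeps.push_back(e3);

    // current approach: scan every local block
    std::vector<BlockID> candidates_a;
    for (BlockID gid = 0; gid < num_local_blocks; ++gid)
        if (failed.count(gid) && !global_failed.count(gid))
            candidates_a.push_back(gid);

    // proposed approach: scan only the recorded sweep events
    std::vector<BlockID> candidates_b;
    for (size_t i = 0; i < event_sweeps.size(); ++i)
        if (!global_failed.count(event_sweeps[i].gid))
            candidates_b.push_back(event_sweeps[i].gid);

    printf("scan-all found %zu candidates, sweep-list scan found %zu\n",
           candidates_a.size(), candidates_b.size());
    return 0;
}
```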

markyoder commented 9 years ago

so this comes down to two main problems: 1) the code that writes the final stresses loops only over the new "original failure" blocks added to the rupture during the current sweep. this can be fixed a number of ways (commented in code for the time being). perhaps the most direct approach is to loop through the element_sweeps list (the list of sweep-event objects) directly.

status: this is patched together by sampling the element_sweeps list directly to determine which sweep events to update (sketched below). if this is not a desirable approach, it could be revisited to construct a running list of global_failure events in which we can (nominally) separate primary/secondary failure-type events by their sequence/index in the list. OR, we can go back to my original plan A: use two lists and duplicate a bit of code (because this is not Python, so we can't just say "in list_a + list_b"). in any case, this logic could probably be made more efficient and code-pretty.
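
a sketch of that patch: walk the sweep list itself so both original- and secondary-failure blocks get final stresses recorded (SweepRecord, setFinalStresses(), and the stress getters are stand-ins, not the actual VQ types):

```cpp
// Record final stresses by iterating the sweep records directly, so every
// block that slipped this event -- original OR secondary failure -- is
// covered. Types and the stress getters are illustrative stand-ins.
#include <cstdio>
#include <vector>

typedef unsigned int BlockID;

struct SweepRecord {
    BlockID gid;
    double final_shear;
    double final_normal;
};

// stand-in for whatever call records the final stresses on a sweep record
static void setFinalStresses(SweepRecord &rec, double shear, double normal) {
    rec.final_shear = shear;
    rec.final_normal = normal;
}

int main() {
    std::vector<SweepRecord> event_sweeps(3);
    event_sweeps[0].gid = 3; event_sweeps[1].gid = 7; event_sweeps[2].gid = 42;

    // after the sweep's stress update, walk the sweep list itself:
    for (size_t i = 0; i < event_sweeps.size(); ++i) {
        BlockID gid = event_sweeps[i].gid;
        double shear  = 1.0 * gid;  // stand-in for sim->getShearStress(gid)
        double normal = 2.0 * gid;  // stand-in for sim->getNormalStress(gid)
        setFinalStresses(event_sweeps[i], shear, normal);
        printf("block %u: final stresses recorded\n", gid);
    }
    return 0;
}
```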

2) for which blocks do we setUpdateField()? for "original" failures, we setUpdateField() for all blocks that have not yet failed:

sim->setUpdateField(gid, (sim->getFailed(gid) ? 0 : sim->getSlipDeficit(gid)));

for secondary failures, do we want to setUpdateField(!=0) for all blocks? all blocks except the ones that just failed (and so have already experienced stress drops)? we have three approaches (see the sketch after this list):

a) original: only update blocks not involved in the event. this is, of course, wrong and results in secondary stress drops = 0.

b) update ALL blocks during secondary sweeps: this runs ok (apparently), and the initial event statistics look ok (i don't know what to make of the stress-drop patterns or what we should expect of the initial/final stresses).

c) update all blocks EXCEPT the original-failure blocks for this sweep. again, this runs and the preliminary event stats look ok (for what it's worth), and i don't really know what to make of the initial/final stresses, etc.
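
a sketch of approach (c) with the same kind of stubs: set a nonzero update field for every block except this sweep's original-failure blocks (the Simulation stub mimics the calls quoted above; orig_failed_this_sweep is illustrative bookkeeping):

```cpp
// Approach (c): during secondary sweeps, update every block EXCEPT the
// blocks that had their original failure this sweep.
#include <cstdio>
#include <set>
#include <vector>

typedef unsigned int BlockID;

struct Simulation {
    std::vector<double> slip_deficit;
    std::vector<double> update_field;
    double getSlipDeficit(BlockID gid) const { return slip_deficit[gid]; }
    void setUpdateField(BlockID gid, double v) { update_field[gid] = v; }
    BlockID numGlobalBlocks() const { return (BlockID)slip_deficit.size(); }
};

int main() {
    Simulation sim;
    sim.slip_deficit.resize(8, 1.0);
    sim.update_field.resize(8, 0.0);

    std::set<BlockID> orig_failed_this_sweep;
    orig_failed_this_sweep.insert(2);
    orig_failed_this_sweep.insert(5);

    for (BlockID gid = 0; gid < sim.numGlobalBlocks(); ++gid) {
        // skip only this sweep's original-failure blocks; everything else
        // (including previously failed blocks) gets a nonzero update field
        double v = orig_failed_this_sweep.count(gid) ? 0.0 : sim.getSlipDeficit(gid);
        sim.setUpdateField(gid, v);
        printf("block %u: update field = %g\n", gid, v);
    }
    return 0;
}
```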

BUT, philosophical discussions aside: (a) is clearly wrong; (b) fails two "make test" tests: 196 - test_two_slip_none_6000 (Failed) and 201 - test_two_slip_none_3000 (Failed); (c) does not fail these two tests (both (b) and (c) still show the 5 Greens-function test failures). SO, unless we've made an even number of corrections that mask a persisting problem, we can -- for the time being, i think -- call this fixed, merge it with the main code, and get to work on those Greens function tests.

repaired code is available on my GitHub under the "stress_bug" (or something like that) branch.

markyoder commented 9 years ago

this problem was resolved as part of the recent merge/pull request #73.