TDycores-Project / TDycore


Updates transient MPFA-O SNES demo #159

Closed: bishtgautam closed this pull request 3 years ago

bishtgautam commented 3 years ago

Updates the demo to support Dirichlet and Neumann BCs.

jeff-cohere commented 3 years ago

Is this functionality tested? If not, would it be easy to add a test for it?

bishtgautam commented 3 years ago

@jeff-cohere The automated testing is showing up as green even though a few tests are failing (see https://github.com/TDycores-Project/TDycore/pull/159/checks?check_run_id=2289017502#step:5:157)

bishtgautam commented 3 years ago

The issue could be that the Docker image is using a PETSc hash that is causing the tests to fail.

jeff-cohere commented 3 years ago

> The issue could be that the Docker image is using a PETSc hash that is causing the tests to fail.

I think the issue is that we changed the shell behavior to continue running tests after a failure. The auto tester is probably only paying attention to the last status code emitted by the test harness, which probably isn't keeping track of all the errors.

So, there are two things we need to do:

  1. Update the Docker image to a version of PETSc that works with this PR (do you have a hash? Or should I just clone the latest PETSc?)
  2. Fiddle with the test scripts so that they keep track of failures, or fiddle with the auto tester workflow to make it keep track.

The home-grown Python script we're using to test everything isn't very flexible. Is it based on buildbot or something? Or is it completely custom?
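
For the second item above, here is a minimal sketch of what "keeping track of failures" could look like. This is hypothetical, not the actual TDycore harness, and the test command below is a placeholder: the point is just that the driver remembers every failure and exits non-zero if anything failed, so CI sees an honest overall status.

```python
# Hypothetical driver sketch (not the actual TDycore harness; the test
# command below is a placeholder): run every test, record each failure,
# and exit non-zero if anything failed so CI reports the right status.
import subprocess
import sys


def run_all(test_commands):
    failures = []
    for cmd in test_commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failures.append(" ".join(cmd))
    return failures


if __name__ == "__main__":
    tests = [
        ["./demo/transient/transient_snes_mpfaof90", "-tdy_regression_test"],
    ]
    failed = run_all(tests)
    for name in failed:
        print("FAILED:", name)
    sys.exit(1 if failed else 0)
```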

jeff-cohere commented 3 years ago

Okay, I've fixed the autotest workflow so that it reflects our test failures. I also fixed some bugs in tests where we were trying to zero out small values, and cleaned up some compiler warnings. The failing test still fails for me when I build locally against the latest PETSc. Do you still think this is related to the PETSc Docker image and not TDycore?

bishtgautam commented 3 years ago

All tests are passing with PETSc 03cacdc99d on my Mac. I will check whether the tests are also failing on Linux. On your end, are the tests reporting a diff, or are they not producing a regression file at all?

jeff-cohere commented 3 years ago

Here's what I see locally with PETSc's latest main branch:

Running tests from 'transient_snes_mpfaof90.cfg':
--------------------------------------------------
----------------------------------------
transient-snes-mpfaof90...
    cd /home/jeff/projects/pnnl/TDycore/demo/transient
    /home/jeff/projects/pnnl/TDycore/demo/transient/transient_snes_mpfaof90 -malloc 0 -successful_exit_code 0 -max_steps 10 -tdy_regression_test -tdy_regression_test_num_cells_per_process 2 -tdy_regression_test_filename transient-snes-mpfaof90 -pc_type lu
    # transient-snes-mpfaof90 : run time : 0.40 seconds
    diff transient-snes-mpfaof90.regression.gold transient-snes-mpfaof90.regression
    FAIL: Liquid Pressure:Max : 0.06576782838524149 > 1e-12 [relative]
    FAIL: Liquid Pressure:Min : 0.021626992297200217 > 1e-12 [relative]
    FAIL: Liquid Pressure:Mean : 0.007825355301193531 > 1e-12 [relative]
    FAIL: Liquid Pressure:0 : 0.021627615020401947 > 1e-12 [relative]
    FAIL: Liquid Pressure:7 : 0.00034342870296423083 > 1e-12 [relative]
transient-snes-mpfaof90... failed.
----------------------------------------

I'll test that PETSc hash, and if it works, I'll redo the Docker image.
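
(For anyone reading the FAIL lines above: they appear to be relative differences between the new regression values and the gold file, checked against the 1e-12 tolerance. A rough sketch of that kind of check, assuming this form rather than quoting the harness's actual code:)

```python
# Assumed form of the relative-difference check behind the FAIL lines
# (a sketch, not the harness's actual implementation).
def relative_diff_exceeds(gold: float, new: float, tol: float = 1e-12) -> bool:
    denom = abs(gold) if gold != 0.0 else 1.0  # avoid dividing by a zero gold value
    return abs(new - gold) / denom > tol


# Example: a value that drifted by ~6.6% relative to the gold file fails.
print(relative_diff_exceeds(1.0, 1.0657678283852))  # True
```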

jeff-cohere commented 3 years ago

Unfortunately, I get the same result with PETSc commit 03cacdc99d on Linux. Let me know if you see anything different.

bishtgautam commented 3 years ago

Aargh ... I was testing the wrong branch. I'm getting the same error now and will debug it.

jeff-cohere commented 3 years ago

Ok, cool. By the way, is there any reason we shouldn't just use make test in the autotest workflow, instead of separately running all the tests and demos explicitly? Does make test not cover everything?

bishtgautam commented 3 years ago

I borrowed Ben's Python script from PFLOTRAN's testing framework, and that script only takes one executable and runs all the tests for it. make test calls that script for each demo in turn, but if any demo's tests report an error, make bails out and doesn't run the remaining ones. I'm open to ideas for fixing/modifying make test.
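
One possible direction, sketched below with made-up paths and a made-up script name (the real layout may differ): a thin Python wrapper that invokes the per-demo test script for every demo, collects failures, and only reports an overall failure at the end instead of bailing out after the first bad demo.

```python
# Hypothetical wrapper (the demo paths and "regression_tests.py" are
# assumptions, not the real layout): run the per-demo test script for
# every demo, collect failures, and fail once at the end.
import subprocess
import sys

DEMOS = ["demo/steady", "demo/transient", "demo/richards"]  # placeholder paths


def main() -> int:
    failed = []
    for demo in DEMOS:
        rc = subprocess.run([sys.executable, "regression_tests.py"], cwd=demo).returncode
        if rc != 0:
            failed.append(demo)
    if failed:
        print("Failing demos:", ", ".join(failed))
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```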

jeff-cohere commented 3 years ago

Thanks for the explanation! I think what we have now works and is pretty easy to maintain. To me, doing "better" would require us to rethink our testing approach, and I think we've already got plenty to do at the moment.

bishtgautam commented 3 years ago

@jeff-cohere This is good to go now :)

bishtgautam commented 3 years ago

Let me add the new tests that you suggested in your review.

codecov-io commented 3 years ago

Codecov Report

Merging #159 (b621411) into master (8916d21) will not change coverage. The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #159   +/-   ##
=======================================
  Coverage   67.73%   67.73%           
=======================================
  Files           7        7           
  Lines        1221     1221           
=======================================
  Hits          827      827           
  Misses        394      394           


bishtgautam commented 3 years ago

A new test has been added.